00:00:00.002 Started by upstream project "autotest-per-patch" build number 132317 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.237 Using shallow fetch with depth 1 00:00:00.237 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.237 > git --version # timeout=10 00:00:00.293 > git --version # 'git version 2.39.2' 00:00:00.293 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.336 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.829 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.842 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.854 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.854 > git config core.sparsecheckout # timeout=10 00:00:05.865 > git read-tree -mu HEAD # timeout=10 00:00:05.881 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.907 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.907 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.006 [Pipeline] Start of Pipeline 00:00:06.016 [Pipeline] library 00:00:06.017 Loading library shm_lib@master 00:00:06.018 Library shm_lib@master is cached. Copying from home. 00:00:06.035 [Pipeline] node 00:00:06.042 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.045 [Pipeline] { 00:00:06.055 [Pipeline] catchError 00:00:06.056 [Pipeline] { 00:00:06.069 [Pipeline] wrap 00:00:06.079 [Pipeline] { 00:00:06.088 [Pipeline] stage 00:00:06.090 [Pipeline] { (Prologue) 00:00:06.303 [Pipeline] sh 00:00:06.584 + logger -p user.info -t JENKINS-CI 00:00:06.600 [Pipeline] echo 00:00:06.601 Node: GP6 00:00:06.607 [Pipeline] sh 00:00:06.904 [Pipeline] setCustomBuildProperty 00:00:06.914 [Pipeline] echo 00:00:06.915 Cleanup processes 00:00:06.918 [Pipeline] sh 00:00:07.197 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.197 1142656 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.208 [Pipeline] sh 00:00:07.489 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.489 ++ grep -v 'sudo pgrep' 00:00:07.489 ++ awk '{print $1}' 00:00:07.489 + sudo kill -9 00:00:07.489 + true 00:00:07.502 [Pipeline] cleanWs 00:00:07.511 [WS-CLEANUP] Deleting project workspace... 00:00:07.512 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.518 [WS-CLEANUP] done 00:00:07.521 [Pipeline] setCustomBuildProperty 00:00:07.533 [Pipeline] sh 00:00:07.816 + sudo git config --global --replace-all safe.directory '*' 00:00:07.905 [Pipeline] httpRequest 00:00:08.734 [Pipeline] echo 00:00:08.736 Sorcerer 10.211.164.20 is alive 00:00:08.744 [Pipeline] retry 00:00:08.746 [Pipeline] { 00:00:08.758 [Pipeline] httpRequest 00:00:08.762 HttpMethod: GET 00:00:08.763 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.763 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.780 Response Code: HTTP/1.1 200 OK 00:00:08.780 Success: Status code 200 is in the accepted range: 200,404 00:00:08.781 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.704 [Pipeline] } 00:00:17.722 [Pipeline] // retry 00:00:17.730 [Pipeline] sh 00:00:18.011 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.026 [Pipeline] httpRequest 00:00:18.418 [Pipeline] echo 00:00:18.420 Sorcerer 10.211.164.20 is alive 00:00:18.430 [Pipeline] retry 00:00:18.432 [Pipeline] { 00:00:18.447 [Pipeline] httpRequest 00:00:18.452 HttpMethod: GET 00:00:18.453 URL: http://10.211.164.20/packages/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:00:18.453 Sending request to url: http://10.211.164.20/packages/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:00:18.461 Response Code: HTTP/1.1 200 OK 00:00:18.461 Success: Status code 200 is in the accepted range: 200,404 00:00:18.462 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:03:35.842 [Pipeline] } 00:03:35.860 [Pipeline] // retry 00:03:35.868 [Pipeline] sh 00:03:36.162 + tar --no-same-owner -xf spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:03:38.714 [Pipeline] sh 00:03:39.004 + git -C spdk log 
--oneline -n5 00:03:39.004 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:03:39.004 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 00:03:39.004 d47eb51c9 bdev: fix a race between reset start and complete 00:03:39.004 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:03:39.004 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:03:39.016 [Pipeline] } 00:03:39.031 [Pipeline] // stage 00:03:39.041 [Pipeline] stage 00:03:39.043 [Pipeline] { (Prepare) 00:03:39.060 [Pipeline] writeFile 00:03:39.076 [Pipeline] sh 00:03:39.406 + logger -p user.info -t JENKINS-CI 00:03:39.421 [Pipeline] sh 00:03:39.717 + logger -p user.info -t JENKINS-CI 00:03:39.730 [Pipeline] sh 00:03:40.023 + cat autorun-spdk.conf 00:03:40.023 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:40.023 SPDK_TEST_NVMF=1 00:03:40.023 SPDK_TEST_NVME_CLI=1 00:03:40.023 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:40.023 SPDK_TEST_NVMF_NICS=e810 00:03:40.023 SPDK_TEST_VFIOUSER=1 00:03:40.023 SPDK_RUN_UBSAN=1 00:03:40.023 NET_TYPE=phy 00:03:40.032 RUN_NIGHTLY=0 00:03:40.036 [Pipeline] readFile 00:03:40.062 [Pipeline] withEnv 00:03:40.065 [Pipeline] { 00:03:40.077 [Pipeline] sh 00:03:40.368 + set -ex 00:03:40.368 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:40.368 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:40.368 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:40.368 ++ SPDK_TEST_NVMF=1 00:03:40.368 ++ SPDK_TEST_NVME_CLI=1 00:03:40.368 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:40.368 ++ SPDK_TEST_NVMF_NICS=e810 00:03:40.368 ++ SPDK_TEST_VFIOUSER=1 00:03:40.368 ++ SPDK_RUN_UBSAN=1 00:03:40.368 ++ NET_TYPE=phy 00:03:40.368 ++ RUN_NIGHTLY=0 00:03:40.368 + case $SPDK_TEST_NVMF_NICS in 00:03:40.368 + DRIVERS=ice 00:03:40.368 + [[ tcp == \r\d\m\a ]] 00:03:40.368 + [[ -n ice ]] 00:03:40.368 + sudo rmmod mlx4_ib mlx5_ib 
irdma i40iw iw_cxgb4 00:03:40.368 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:40.368 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:40.368 rmmod: ERROR: Module irdma is not currently loaded 00:03:40.368 rmmod: ERROR: Module i40iw is not currently loaded 00:03:40.368 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:40.368 + true 00:03:40.368 + for D in $DRIVERS 00:03:40.368 + sudo modprobe ice 00:03:40.368 + exit 0 00:03:40.379 [Pipeline] } 00:03:40.393 [Pipeline] // withEnv 00:03:40.398 [Pipeline] } 00:03:40.411 [Pipeline] // stage 00:03:40.421 [Pipeline] catchError 00:03:40.422 [Pipeline] { 00:03:40.436 [Pipeline] timeout 00:03:40.436 Timeout set to expire in 1 hr 0 min 00:03:40.438 [Pipeline] { 00:03:40.451 [Pipeline] stage 00:03:40.453 [Pipeline] { (Tests) 00:03:40.467 [Pipeline] sh 00:03:40.758 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:40.758 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:40.758 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:40.758 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:40.758 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.758 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:40.758 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:40.758 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:40.758 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:40.758 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:40.758 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:40.758 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:40.758 + source /etc/os-release 00:03:40.758 ++ NAME='Fedora Linux' 00:03:40.758 ++ VERSION='39 (Cloud Edition)' 00:03:40.758 ++ ID=fedora 00:03:40.758 ++ VERSION_ID=39 00:03:40.758 ++ VERSION_CODENAME= 00:03:40.758 ++ PLATFORM_ID=platform:f39 00:03:40.758 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:40.758 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:40.758 ++ LOGO=fedora-logo-icon 00:03:40.758 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:40.758 ++ HOME_URL=https://fedoraproject.org/ 00:03:40.758 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:40.758 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:40.758 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:40.758 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:40.758 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:40.758 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:40.758 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:40.758 ++ SUPPORT_END=2024-11-12 00:03:40.758 ++ VARIANT='Cloud Edition' 00:03:40.758 ++ VARIANT_ID=cloud 00:03:40.758 + uname -a 00:03:40.758 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:40.758 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:41.702 Hugepages 00:03:41.702 node hugesize free / total 00:03:41.702 node0 1048576kB 0 / 0 00:03:41.702 node0 2048kB 0 / 0 00:03:41.702 node1 1048576kB 0 / 0 00:03:41.702 node1 2048kB 0 / 0 00:03:41.702 00:03:41.702 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.702 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:41.702 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:03:41.702 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:41.702 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:41.702 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:41.702 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:41.702 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:41.961 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:41.961 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:41.961 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:41.961 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:41.961 + rm -f /tmp/spdk-ld-path 00:03:41.961 + source autorun-spdk.conf 00:03:41.961 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.961 ++ SPDK_TEST_NVMF=1 00:03:41.961 ++ SPDK_TEST_NVME_CLI=1 00:03:41.961 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.961 ++ SPDK_TEST_NVMF_NICS=e810 00:03:41.961 ++ SPDK_TEST_VFIOUSER=1 00:03:41.961 ++ SPDK_RUN_UBSAN=1 00:03:41.961 ++ NET_TYPE=phy 00:03:41.961 ++ RUN_NIGHTLY=0 00:03:41.961 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:41.961 + [[ -n '' ]] 00:03:41.961 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.961 + for M in /var/spdk/build-*-manifest.txt 00:03:41.961 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:41.961 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:41.961 + for M in /var/spdk/build-*-manifest.txt 00:03:41.962 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:41.962 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:41.962 + for M in /var/spdk/build-*-manifest.txt 00:03:41.962 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:03:41.962 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:41.962 ++ uname 00:03:41.962 + [[ Linux == \L\i\n\u\x ]] 00:03:41.962 + sudo dmesg -T 00:03:41.962 + sudo dmesg --clear 00:03:41.962 + dmesg_pid=1143982 00:03:41.962 + [[ Fedora Linux == FreeBSD ]] 00:03:41.962 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.962 + sudo dmesg -Tw 00:03:41.962 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.962 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:41.962 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:41.962 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:41.962 + [[ -x /usr/src/fio-static/fio ]] 00:03:41.962 + export FIO_BIN=/usr/src/fio-static/fio 00:03:41.962 + FIO_BIN=/usr/src/fio-static/fio 00:03:41.962 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:41.962 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:41.962 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:41.962 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.962 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.962 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:41.962 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.962 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.962 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:41.962 10:31:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:41.962 10:31:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:41.962 10:31:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:41.962 10:31:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:41.962 10:31:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:41.962 10:31:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:41.962 10:31:29 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.962 10:31:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:41.962 10:31:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:41.962 10:31:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.962 10:31:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.962 10:31:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.962 10:31:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.962 10:31:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.962 10:31:29 -- paths/export.sh@5 -- $ export PATH 00:03:41.962 10:31:29 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.962 10:31:29 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:41.962 10:31:29 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:41.962 10:31:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008689.XXXXXX 00:03:41.962 10:31:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008689.fMqBNs 00:03:41.962 10:31:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:41.962 10:31:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:41.962 10:31:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:41.962 10:31:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:41.962 10:31:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:41.962 10:31:29 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:41.962 10:31:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:41.962 10:31:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.221 10:31:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:42.221 10:31:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:42.221 10:31:29 -- pm/common@17 -- $ local monitor 00:03:42.221 10:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.221 10:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.221 10:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.221 10:31:29 -- pm/common@21 -- $ date +%s 00:03:42.221 10:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.221 10:31:29 -- pm/common@21 -- $ date +%s 00:03:42.221 10:31:29 -- pm/common@25 -- $ sleep 1 00:03:42.221 10:31:29 -- pm/common@21 -- $ date +%s 00:03:42.221 10:31:29 -- pm/common@21 -- $ date +%s 00:03:42.221 10:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008689 00:03:42.221 10:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008689 00:03:42.221 10:31:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008689 00:03:42.221 10:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008689 00:03:42.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008689_collect-cpu-load.pm.log 00:03:42.221 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008689_collect-vmstat.pm.log 00:03:42.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008689_collect-cpu-temp.pm.log 00:03:42.222 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008689_collect-bmc-pm.bmc.pm.log 00:03:43.165 10:31:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:43.165 10:31:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:43.165 10:31:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:43.165 10:31:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.165 10:31:30 -- spdk/autobuild.sh@16 -- $ date -u 00:03:43.165 Tue Nov 19 09:31:30 AM UTC 2024 00:03:43.165 10:31:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:43.165 v25.01-pre-192-g53ca6a885 00:03:43.165 10:31:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:43.165 10:31:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:43.165 10:31:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:43.165 10:31:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:43.165 10:31:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:43.165 10:31:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.165 ************************************ 00:03:43.165 START TEST ubsan 00:03:43.165 ************************************ 00:03:43.165 10:31:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:43.165 using ubsan 00:03:43.165 00:03:43.165 real 0m0.000s 00:03:43.165 user 0m0.000s 00:03:43.165 sys 0m0.000s 00:03:43.165 10:31:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.165 10:31:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:43.165 ************************************ 00:03:43.165 END TEST ubsan 00:03:43.165 
************************************ 00:03:43.165 10:31:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:43.165 10:31:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:43.165 10:31:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:43.165 10:31:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:43.165 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:43.165 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:43.426 Using 'verbs' RDMA provider 00:03:54.361 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:04.356 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:04.356 Creating mk/config.mk...done. 00:04:04.356 Creating mk/cc.flags.mk...done. 00:04:04.356 Type 'make' to build. 
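The build stage above repeatedly sources autorun-spdk.conf (the `+ source ...` and `++ SPDK_TEST_...=1` xtrace lines) to turn the flag file into shell variables before configuring and building. As a hedged illustration only — this script is an assumption written for this note, not SPDK's actual autorun code — the idiom looks like:

```shell
# Minimal sketch of sourcing a KEY=VALUE conf file, as the
# "+ source .../autorun-spdk.conf" xtrace lines in the log suggest.
# Variable names mirror the log; the script itself is hypothetical.
set -e
conf="$(mktemp)"
cat > "$conf" <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
EOF
# Sourcing executes each assignment in the current shell,
# so the flags become ordinary variables afterwards.
[ -f "$conf" ]
source "$conf"
echo "transport=$SPDK_TEST_NVMF_TRANSPORT functional=$SPDK_RUN_FUNCTIONAL_TEST"
rm -f "$conf"
```

Because `source` runs in the current process (unlike executing the file), the variables remain visible to every later step in the same shell, which is why the log can test `$SPDK_TEST_NVMF_NICS` long after the conf was read.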
00:04:04.356 10:31:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:04.356 10:31:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:04.356 10:31:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:04.356 10:31:51 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.356 ************************************ 00:04:04.356 START TEST make 00:04:04.356 ************************************ 00:04:04.356 10:31:51 make -- common/autotest_common.sh@1129 -- $ make -j48 00:04:04.622 make[1]: Nothing to be done for 'all'. 00:04:06.545 The Meson build system 00:04:06.545 Version: 1.5.0 00:04:06.545 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:06.545 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:06.545 Build type: native build 00:04:06.545 Project name: libvfio-user 00:04:06.545 Project version: 0.0.1 00:04:06.545 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:06.545 C linker for the host machine: cc ld.bfd 2.40-14 00:04:06.545 Host machine cpu family: x86_64 00:04:06.545 Host machine cpu: x86_64 00:04:06.545 Run-time dependency threads found: YES 00:04:06.545 Library dl found: YES 00:04:06.545 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:06.545 Run-time dependency json-c found: YES 0.17 00:04:06.545 Run-time dependency cmocka found: YES 1.1.7 00:04:06.545 Program pytest-3 found: NO 00:04:06.545 Program flake8 found: NO 00:04:06.545 Program misspell-fixer found: NO 00:04:06.545 Program restructuredtext-lint found: NO 00:04:06.545 Program valgrind found: YES (/usr/bin/valgrind) 00:04:06.545 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:06.545 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:06.545 Compiler for C supports arguments -Wwrite-strings: YES 00:04:06.545 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:06.545 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:06.545 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:06.545 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:06.545 Build targets in project: 8 00:04:06.545 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:06.545 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:06.545 00:04:06.545 libvfio-user 0.0.1 00:04:06.545 00:04:06.545 User defined options 00:04:06.545 buildtype : debug 00:04:06.545 default_library: shared 00:04:06.545 libdir : /usr/local/lib 00:04:06.545 00:04:06.545 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:07.130 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:07.392 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:07.392 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:07.392 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:07.392 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:07.392 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:07.392 [6/37] Compiling C object samples/null.p/null.c.o 00:04:07.392 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:07.392 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:07.392 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:07.392 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:07.392 [11/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:04:07.392 [12/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:07.392 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:07.392 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:07.392 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:07.392 [16/37] Compiling C object samples/server.p/server.c.o 00:04:07.392 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:07.654 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:07.654 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:07.654 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:07.654 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:07.654 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:07.654 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:07.654 [24/37] Compiling C object samples/client.p/client.c.o 00:04:07.654 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:07.654 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:07.654 [27/37] Linking target samples/client 00:04:07.654 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:07.654 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:07.917 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:04:07.917 [31/37] Linking target test/unit_tests 00:04:07.918 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:08.181 [33/37] Linking target samples/server 00:04:08.181 [34/37] Linking target samples/null 00:04:08.181 [35/37] Linking target samples/lspci 00:04:08.181 [36/37] Linking target samples/gpio-pci-idio-16 00:04:08.181 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:08.181 INFO: autodetecting backend as ninja 00:04:08.181 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:08.181 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.125 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:09.125 ninja: no work to do. 00:04:14.400 The Meson build system 00:04:14.400 Version: 1.5.0 00:04:14.400 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:14.400 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:14.401 Build type: native build 00:04:14.401 Program cat found: YES (/usr/bin/cat) 00:04:14.401 Project name: DPDK 00:04:14.401 Project version: 24.03.0 00:04:14.401 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:14.401 C linker for the host machine: cc ld.bfd 2.40-14 00:04:14.401 Host machine cpu family: x86_64 00:04:14.401 Host machine cpu: x86_64 00:04:14.401 Message: ## Building in Developer Mode ## 00:04:14.401 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:14.401 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:14.401 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:14.401 Program python3 found: YES (/usr/bin/python3) 00:04:14.401 Program cat found: YES (/usr/bin/cat) 00:04:14.401 Compiler for C supports arguments -march=native: YES 00:04:14.401 Checking for size of "void *" : 8 00:04:14.401 Checking for size of "void *" : 8 (cached) 00:04:14.401 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:14.401 Library m found: YES 00:04:14.401 Library numa found: YES 00:04:14.401 Has header "numaif.h" : YES 00:04:14.401 Library fdt found: NO 
00:04:14.401 Library execinfo found: NO 00:04:14.401 Has header "execinfo.h" : YES 00:04:14.401 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:14.401 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:14.401 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:14.401 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:14.401 Run-time dependency openssl found: YES 3.1.1 00:04:14.401 Run-time dependency libpcap found: YES 1.10.4 00:04:14.401 Has header "pcap.h" with dependency libpcap: YES 00:04:14.401 Compiler for C supports arguments -Wcast-qual: YES 00:04:14.401 Compiler for C supports arguments -Wdeprecated: YES 00:04:14.401 Compiler for C supports arguments -Wformat: YES 00:04:14.401 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:14.401 Compiler for C supports arguments -Wformat-security: NO 00:04:14.401 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:14.401 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:14.401 Compiler for C supports arguments -Wnested-externs: YES 00:04:14.401 Compiler for C supports arguments -Wold-style-definition: YES 00:04:14.401 Compiler for C supports arguments -Wpointer-arith: YES 00:04:14.401 Compiler for C supports arguments -Wsign-compare: YES 00:04:14.401 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:14.401 Compiler for C supports arguments -Wundef: YES 00:04:14.401 Compiler for C supports arguments -Wwrite-strings: YES 00:04:14.401 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:14.401 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:14.401 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:14.401 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:14.401 Program objdump found: YES (/usr/bin/objdump) 00:04:14.401 Compiler for C supports arguments -mavx512f: YES 00:04:14.401 Checking if "AVX512 checking" compiles: YES 00:04:14.401 
Fetching value of define "__SSE4_2__" : 1 00:04:14.401 Fetching value of define "__AES__" : 1 00:04:14.401 Fetching value of define "__AVX__" : 1 00:04:14.401 Fetching value of define "__AVX2__" : (undefined) 00:04:14.401 Fetching value of define "__AVX512BW__" : (undefined) 00:04:14.401 Fetching value of define "__AVX512CD__" : (undefined) 00:04:14.401 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:14.401 Fetching value of define "__AVX512F__" : (undefined) 00:04:14.401 Fetching value of define "__AVX512VL__" : (undefined) 00:04:14.401 Fetching value of define "__PCLMUL__" : 1 00:04:14.401 Fetching value of define "__RDRND__" : 1 00:04:14.401 Fetching value of define "__RDSEED__" : (undefined) 00:04:14.401 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:14.401 Fetching value of define "__znver1__" : (undefined) 00:04:14.401 Fetching value of define "__znver2__" : (undefined) 00:04:14.401 Fetching value of define "__znver3__" : (undefined) 00:04:14.401 Fetching value of define "__znver4__" : (undefined) 00:04:14.401 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:14.401 Message: lib/log: Defining dependency "log" 00:04:14.401 Message: lib/kvargs: Defining dependency "kvargs" 00:04:14.401 Message: lib/telemetry: Defining dependency "telemetry" 00:04:14.401 Checking for function "getentropy" : NO 00:04:14.401 Message: lib/eal: Defining dependency "eal" 00:04:14.401 Message: lib/ring: Defining dependency "ring" 00:04:14.401 Message: lib/rcu: Defining dependency "rcu" 00:04:14.401 Message: lib/mempool: Defining dependency "mempool" 00:04:14.401 Message: lib/mbuf: Defining dependency "mbuf" 00:04:14.401 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:14.401 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:14.401 Compiler for C supports arguments -mpclmul: YES 00:04:14.401 Compiler for C supports arguments -maes: YES 00:04:14.401 Compiler for C supports arguments -mavx512f: YES (cached) 
00:04:14.401 Compiler for C supports arguments -mavx512bw: YES 00:04:14.401 Compiler for C supports arguments -mavx512dq: YES 00:04:14.401 Compiler for C supports arguments -mavx512vl: YES 00:04:14.401 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:14.401 Compiler for C supports arguments -mavx2: YES 00:04:14.401 Compiler for C supports arguments -mavx: YES 00:04:14.401 Message: lib/net: Defining dependency "net" 00:04:14.401 Message: lib/meter: Defining dependency "meter" 00:04:14.401 Message: lib/ethdev: Defining dependency "ethdev" 00:04:14.401 Message: lib/pci: Defining dependency "pci" 00:04:14.401 Message: lib/cmdline: Defining dependency "cmdline" 00:04:14.401 Message: lib/hash: Defining dependency "hash" 00:04:14.401 Message: lib/timer: Defining dependency "timer" 00:04:14.401 Message: lib/compressdev: Defining dependency "compressdev" 00:04:14.401 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:14.401 Message: lib/dmadev: Defining dependency "dmadev" 00:04:14.401 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:14.401 Message: lib/power: Defining dependency "power" 00:04:14.401 Message: lib/reorder: Defining dependency "reorder" 00:04:14.401 Message: lib/security: Defining dependency "security" 00:04:14.401 Has header "linux/userfaultfd.h" : YES 00:04:14.401 Has header "linux/vduse.h" : YES 00:04:14.401 Message: lib/vhost: Defining dependency "vhost" 00:04:14.401 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:14.401 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:14.401 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:14.401 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:14.401 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:14.401 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:14.401 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:14.401 Message: 
Disabling event/* drivers: missing internal dependency "eventdev" 00:04:14.401 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:14.401 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:14.401 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:14.401 Configuring doxy-api-html.conf using configuration 00:04:14.401 Configuring doxy-api-man.conf using configuration 00:04:14.401 Program mandb found: YES (/usr/bin/mandb) 00:04:14.401 Program sphinx-build found: NO 00:04:14.401 Configuring rte_build_config.h using configuration 00:04:14.401 Message: 00:04:14.401 ================= 00:04:14.401 Applications Enabled 00:04:14.401 ================= 00:04:14.401 00:04:14.401 apps: 00:04:14.401 00:04:14.401 00:04:14.401 Message: 00:04:14.401 ================= 00:04:14.401 Libraries Enabled 00:04:14.401 ================= 00:04:14.401 00:04:14.401 libs: 00:04:14.401 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:14.401 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:14.401 cryptodev, dmadev, power, reorder, security, vhost, 00:04:14.401 00:04:14.401 Message: 00:04:14.402 =============== 00:04:14.402 Drivers Enabled 00:04:14.402 =============== 00:04:14.402 00:04:14.402 common: 00:04:14.402 00:04:14.402 bus: 00:04:14.402 pci, vdev, 00:04:14.402 mempool: 00:04:14.402 ring, 00:04:14.402 dma: 00:04:14.402 00:04:14.402 net: 00:04:14.402 00:04:14.402 crypto: 00:04:14.402 00:04:14.402 compress: 00:04:14.402 00:04:14.402 vdpa: 00:04:14.402 00:04:14.402 00:04:14.402 Message: 00:04:14.402 ================= 00:04:14.402 Content Skipped 00:04:14.402 ================= 00:04:14.402 00:04:14.402 apps: 00:04:14.402 dumpcap: explicitly disabled via build config 00:04:14.402 graph: explicitly disabled via build config 00:04:14.402 pdump: explicitly disabled via build config 00:04:14.402 proc-info: explicitly disabled via build config 00:04:14.402 test-acl: explicitly disabled via build config 
00:04:14.402 test-bbdev: explicitly disabled via build config 00:04:14.402 test-cmdline: explicitly disabled via build config 00:04:14.402 test-compress-perf: explicitly disabled via build config 00:04:14.402 test-crypto-perf: explicitly disabled via build config 00:04:14.402 test-dma-perf: explicitly disabled via build config 00:04:14.402 test-eventdev: explicitly disabled via build config 00:04:14.402 test-fib: explicitly disabled via build config 00:04:14.402 test-flow-perf: explicitly disabled via build config 00:04:14.402 test-gpudev: explicitly disabled via build config 00:04:14.402 test-mldev: explicitly disabled via build config 00:04:14.402 test-pipeline: explicitly disabled via build config 00:04:14.402 test-pmd: explicitly disabled via build config 00:04:14.402 test-regex: explicitly disabled via build config 00:04:14.402 test-sad: explicitly disabled via build config 00:04:14.402 test-security-perf: explicitly disabled via build config 00:04:14.402 00:04:14.402 libs: 00:04:14.402 argparse: explicitly disabled via build config 00:04:14.402 metrics: explicitly disabled via build config 00:04:14.402 acl: explicitly disabled via build config 00:04:14.402 bbdev: explicitly disabled via build config 00:04:14.402 bitratestats: explicitly disabled via build config 00:04:14.402 bpf: explicitly disabled via build config 00:04:14.402 cfgfile: explicitly disabled via build config 00:04:14.402 distributor: explicitly disabled via build config 00:04:14.402 efd: explicitly disabled via build config 00:04:14.402 eventdev: explicitly disabled via build config 00:04:14.402 dispatcher: explicitly disabled via build config 00:04:14.402 gpudev: explicitly disabled via build config 00:04:14.402 gro: explicitly disabled via build config 00:04:14.402 gso: explicitly disabled via build config 00:04:14.402 ip_frag: explicitly disabled via build config 00:04:14.402 jobstats: explicitly disabled via build config 00:04:14.402 latencystats: explicitly disabled via build config 
00:04:14.402 lpm: explicitly disabled via build config 00:04:14.402 member: explicitly disabled via build config 00:04:14.402 pcapng: explicitly disabled via build config 00:04:14.402 rawdev: explicitly disabled via build config 00:04:14.402 regexdev: explicitly disabled via build config 00:04:14.402 mldev: explicitly disabled via build config 00:04:14.402 rib: explicitly disabled via build config 00:04:14.402 sched: explicitly disabled via build config 00:04:14.402 stack: explicitly disabled via build config 00:04:14.402 ipsec: explicitly disabled via build config 00:04:14.402 pdcp: explicitly disabled via build config 00:04:14.402 fib: explicitly disabled via build config 00:04:14.402 port: explicitly disabled via build config 00:04:14.402 pdump: explicitly disabled via build config 00:04:14.402 table: explicitly disabled via build config 00:04:14.402 pipeline: explicitly disabled via build config 00:04:14.402 graph: explicitly disabled via build config 00:04:14.402 node: explicitly disabled via build config 00:04:14.402 00:04:14.402 drivers: 00:04:14.402 common/cpt: not in enabled drivers build config 00:04:14.402 common/dpaax: not in enabled drivers build config 00:04:14.402 common/iavf: not in enabled drivers build config 00:04:14.402 common/idpf: not in enabled drivers build config 00:04:14.402 common/ionic: not in enabled drivers build config 00:04:14.402 common/mvep: not in enabled drivers build config 00:04:14.402 common/octeontx: not in enabled drivers build config 00:04:14.402 bus/auxiliary: not in enabled drivers build config 00:04:14.402 bus/cdx: not in enabled drivers build config 00:04:14.402 bus/dpaa: not in enabled drivers build config 00:04:14.402 bus/fslmc: not in enabled drivers build config 00:04:14.402 bus/ifpga: not in enabled drivers build config 00:04:14.402 bus/platform: not in enabled drivers build config 00:04:14.402 bus/uacce: not in enabled drivers build config 00:04:14.402 bus/vmbus: not in enabled drivers build config 00:04:14.402 
common/cnxk: not in enabled drivers build config 00:04:14.402 common/mlx5: not in enabled drivers build config 00:04:14.402 common/nfp: not in enabled drivers build config 00:04:14.402 common/nitrox: not in enabled drivers build config 00:04:14.402 common/qat: not in enabled drivers build config 00:04:14.402 common/sfc_efx: not in enabled drivers build config 00:04:14.402 mempool/bucket: not in enabled drivers build config 00:04:14.402 mempool/cnxk: not in enabled drivers build config 00:04:14.402 mempool/dpaa: not in enabled drivers build config 00:04:14.402 mempool/dpaa2: not in enabled drivers build config 00:04:14.402 mempool/octeontx: not in enabled drivers build config 00:04:14.402 mempool/stack: not in enabled drivers build config 00:04:14.402 dma/cnxk: not in enabled drivers build config 00:04:14.402 dma/dpaa: not in enabled drivers build config 00:04:14.402 dma/dpaa2: not in enabled drivers build config 00:04:14.402 dma/hisilicon: not in enabled drivers build config 00:04:14.402 dma/idxd: not in enabled drivers build config 00:04:14.402 dma/ioat: not in enabled drivers build config 00:04:14.402 dma/skeleton: not in enabled drivers build config 00:04:14.402 net/af_packet: not in enabled drivers build config 00:04:14.402 net/af_xdp: not in enabled drivers build config 00:04:14.402 net/ark: not in enabled drivers build config 00:04:14.402 net/atlantic: not in enabled drivers build config 00:04:14.402 net/avp: not in enabled drivers build config 00:04:14.402 net/axgbe: not in enabled drivers build config 00:04:14.402 net/bnx2x: not in enabled drivers build config 00:04:14.402 net/bnxt: not in enabled drivers build config 00:04:14.402 net/bonding: not in enabled drivers build config 00:04:14.402 net/cnxk: not in enabled drivers build config 00:04:14.402 net/cpfl: not in enabled drivers build config 00:04:14.402 net/cxgbe: not in enabled drivers build config 00:04:14.402 net/dpaa: not in enabled drivers build config 00:04:14.402 net/dpaa2: not in enabled drivers 
build config 00:04:14.402 net/e1000: not in enabled drivers build config 00:04:14.402 net/ena: not in enabled drivers build config 00:04:14.403 net/enetc: not in enabled drivers build config 00:04:14.403 net/enetfec: not in enabled drivers build config 00:04:14.403 net/enic: not in enabled drivers build config 00:04:14.403 net/failsafe: not in enabled drivers build config 00:04:14.403 net/fm10k: not in enabled drivers build config 00:04:14.403 net/gve: not in enabled drivers build config 00:04:14.403 net/hinic: not in enabled drivers build config 00:04:14.403 net/hns3: not in enabled drivers build config 00:04:14.403 net/i40e: not in enabled drivers build config 00:04:14.403 net/iavf: not in enabled drivers build config 00:04:14.403 net/ice: not in enabled drivers build config 00:04:14.403 net/idpf: not in enabled drivers build config 00:04:14.403 net/igc: not in enabled drivers build config 00:04:14.403 net/ionic: not in enabled drivers build config 00:04:14.403 net/ipn3ke: not in enabled drivers build config 00:04:14.403 net/ixgbe: not in enabled drivers build config 00:04:14.403 net/mana: not in enabled drivers build config 00:04:14.403 net/memif: not in enabled drivers build config 00:04:14.403 net/mlx4: not in enabled drivers build config 00:04:14.403 net/mlx5: not in enabled drivers build config 00:04:14.403 net/mvneta: not in enabled drivers build config 00:04:14.403 net/mvpp2: not in enabled drivers build config 00:04:14.403 net/netvsc: not in enabled drivers build config 00:04:14.403 net/nfb: not in enabled drivers build config 00:04:14.403 net/nfp: not in enabled drivers build config 00:04:14.403 net/ngbe: not in enabled drivers build config 00:04:14.403 net/null: not in enabled drivers build config 00:04:14.403 net/octeontx: not in enabled drivers build config 00:04:14.403 net/octeon_ep: not in enabled drivers build config 00:04:14.403 net/pcap: not in enabled drivers build config 00:04:14.403 net/pfe: not in enabled drivers build config 00:04:14.403 
net/qede: not in enabled drivers build config 00:04:14.403 net/ring: not in enabled drivers build config 00:04:14.403 net/sfc: not in enabled drivers build config 00:04:14.403 net/softnic: not in enabled drivers build config 00:04:14.403 net/tap: not in enabled drivers build config 00:04:14.403 net/thunderx: not in enabled drivers build config 00:04:14.403 net/txgbe: not in enabled drivers build config 00:04:14.403 net/vdev_netvsc: not in enabled drivers build config 00:04:14.403 net/vhost: not in enabled drivers build config 00:04:14.403 net/virtio: not in enabled drivers build config 00:04:14.403 net/vmxnet3: not in enabled drivers build config 00:04:14.403 raw/*: missing internal dependency, "rawdev" 00:04:14.403 crypto/armv8: not in enabled drivers build config 00:04:14.403 crypto/bcmfs: not in enabled drivers build config 00:04:14.403 crypto/caam_jr: not in enabled drivers build config 00:04:14.403 crypto/ccp: not in enabled drivers build config 00:04:14.403 crypto/cnxk: not in enabled drivers build config 00:04:14.403 crypto/dpaa_sec: not in enabled drivers build config 00:04:14.403 crypto/dpaa2_sec: not in enabled drivers build config 00:04:14.403 crypto/ipsec_mb: not in enabled drivers build config 00:04:14.403 crypto/mlx5: not in enabled drivers build config 00:04:14.403 crypto/mvsam: not in enabled drivers build config 00:04:14.403 crypto/nitrox: not in enabled drivers build config 00:04:14.403 crypto/null: not in enabled drivers build config 00:04:14.403 crypto/octeontx: not in enabled drivers build config 00:04:14.403 crypto/openssl: not in enabled drivers build config 00:04:14.403 crypto/scheduler: not in enabled drivers build config 00:04:14.403 crypto/uadk: not in enabled drivers build config 00:04:14.403 crypto/virtio: not in enabled drivers build config 00:04:14.403 compress/isal: not in enabled drivers build config 00:04:14.403 compress/mlx5: not in enabled drivers build config 00:04:14.403 compress/nitrox: not in enabled drivers build config 
00:04:14.403 compress/octeontx: not in enabled drivers build config 00:04:14.403 compress/zlib: not in enabled drivers build config 00:04:14.403 regex/*: missing internal dependency, "regexdev" 00:04:14.403 ml/*: missing internal dependency, "mldev" 00:04:14.403 vdpa/ifc: not in enabled drivers build config 00:04:14.403 vdpa/mlx5: not in enabled drivers build config 00:04:14.403 vdpa/nfp: not in enabled drivers build config 00:04:14.403 vdpa/sfc: not in enabled drivers build config 00:04:14.403 event/*: missing internal dependency, "eventdev" 00:04:14.403 baseband/*: missing internal dependency, "bbdev" 00:04:14.403 gpu/*: missing internal dependency, "gpudev" 00:04:14.403 00:04:14.403 00:04:14.403 Build targets in project: 85 00:04:14.403 00:04:14.403 DPDK 24.03.0 00:04:14.403 00:04:14.403 User defined options 00:04:14.403 buildtype : debug 00:04:14.403 default_library : shared 00:04:14.403 libdir : lib 00:04:14.403 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:14.403 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:14.403 c_link_args : 00:04:14.403 cpu_instruction_set: native 00:04:14.403 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:14.403 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:14.403 enable_docs : false 00:04:14.403 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:14.403 enable_kmods : false 00:04:14.403 max_lcores : 128 00:04:14.403 tests : false 00:04:14.403 00:04:14.403 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 
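
The "User defined options" summary above can be reproduced as a `meson setup` invocation. This is a hedged sketch, not the exact command the harness ran: the option values (buildtype, c_args, disable_apps/disable_libs lists, enable_drivers, max_lcores) are copied from the logged summary, but the workspace path is specific to this CI node and the long disable lists are abbreviated here — substitute the full comma-separated lists from the log when reconstructing the build.

```shell
# Sketch of the DPDK configure step implied by the "User defined options"
# summary in the log above. Paths/lists are assumptions taken from the log;
# "..." marks lists elided for brevity (use the full values from the summary).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk

meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix="$PWD/build" \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps='test-fib,test-sad,test,...' \
  -Ddisable_libs='bbdev,argparse,latencystats,...' \
  -Denable_docs=false \
  -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false

# The [1/268] ... compile lines that follow correspond to:
ninja -C build-tmp
```

Disabling the unused apps, libs, and drivers up front is what shrinks the build to the 85 targets reported ("Build targets in project: 85") and produces the long "explicitly disabled via build config" / "not in enabled drivers build config" listings earlier in the log.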
00:04:14.403 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:14.403 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:14.403 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:14.403 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:14.403 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:14.403 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:14.403 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:14.665 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:14.665 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:14.665 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:14.665 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:14.665 [11/268] Linking static target lib/librte_kvargs.a 00:04:14.665 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:14.665 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:14.665 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:14.665 [15/268] Linking static target lib/librte_log.a 00:04:14.665 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:15.237 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.237 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:15.237 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:15.499 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:15.499 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:15.499 [22/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:15.499 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:15.499 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:15.499 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:15.499 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:15.499 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:15.499 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:15.499 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:15.499 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:15.499 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:15.499 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:15.499 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:15.499 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:15.499 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:15.499 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:15.499 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:15.499 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:15.499 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:15.499 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:15.499 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:15.499 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:15.499 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:15.499 [44/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:15.499 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:15.499 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:15.499 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:15.499 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:15.499 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:15.499 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:15.499 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:15.499 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:15.499 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:15.499 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:15.499 [55/268] Linking static target lib/librte_telemetry.a 00:04:15.499 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:15.499 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:15.499 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:15.759 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:15.759 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:15.759 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:15.759 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:15.759 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:15.759 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:15.759 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:16.020 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:16.020 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:16.020 [68/268] Linking static target lib/librte_pci.a 00:04:16.020 [69/268] Linking target lib/librte_log.so.24.1 00:04:16.280 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:16.280 [71/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:16.280 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:16.280 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:16.280 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:16.280 [75/268] Linking target lib/librte_kvargs.so.24.1 00:04:16.280 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:16.280 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:16.280 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:16.280 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:16.280 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:16.280 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:16.280 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:16.280 [83/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:16.543 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:16.543 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:16.543 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:16.543 [87/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.543 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:16.543 [89/268] Linking static target lib/librte_meter.a 00:04:16.543 [90/268] Linking static target lib/librte_ring.a 00:04:16.543 [91/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:16.543 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:16.543 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:16.543 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:16.543 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:16.543 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:16.543 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:16.543 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:16.543 [99/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.543 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:16.543 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:16.543 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:16.543 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:16.543 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:16.543 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:16.543 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:16.543 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:16.543 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:16.543 [109/268] Linking static target lib/librte_eal.a 00:04:16.543 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:16.543 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:16.543 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:16.543 [113/268] Linking target 
lib/librte_telemetry.so.24.1 00:04:16.543 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:16.543 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:16.543 [116/268] Linking static target lib/librte_rcu.a 00:04:16.802 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:16.802 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:16.802 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:16.802 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:16.802 [121/268] Linking static target lib/librte_mempool.a 00:04:16.802 [122/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:16.802 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:16.802 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:16.802 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:16.802 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:16.802 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:16.802 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:16.802 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:16.802 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:16.802 [131/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:17.061 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.061 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:17.061 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:17.061 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.061 [136/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:17.061 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:17.061 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:17.061 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:17.061 [140/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:17.061 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:17.325 [142/268] Linking static target lib/librte_net.a 00:04:17.325 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:17.325 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:17.325 [145/268] Linking static target lib/librte_cmdline.a 00:04:17.586 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:17.586 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:17.586 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.586 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:17.586 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:17.586 [151/268] Linking static target lib/librte_timer.a 00:04:17.586 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:17.586 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:17.586 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:17.586 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:17.586 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:17.586 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:17.586 [158/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:04:17.586 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.586 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:17.845 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:17.845 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:17.845 [163/268] Linking static target lib/librte_dmadev.a 00:04:17.845 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:17.845 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:17.845 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:17.845 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:17.845 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:17.845 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.845 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:17.845 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:17.845 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:17.845 [173/268] Linking static target lib/librte_power.a 00:04:17.845 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:18.104 [175/268] Linking static target lib/librte_compressdev.a 00:04:18.104 [176/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.104 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:18.104 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:18.104 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:18.104 [180/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:18.104 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:18.104 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:18.104 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:18.104 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:18.104 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:18.104 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:18.104 [187/268] Linking static target lib/librte_reorder.a 00:04:18.104 [188/268] Linking static target lib/librte_hash.a 00:04:18.104 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:18.104 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:18.104 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:18.362 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:18.362 [193/268] Linking static target lib/librte_mbuf.a 00:04:18.362 [194/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.363 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:18.363 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:18.363 [197/268] Linking static target lib/librte_security.a 00:04:18.363 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:18.363 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:18.363 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:18.363 [201/268] Linking static target drivers/librte_bus_vdev.a 00:04:18.363 [202/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.363 [203/268] Generating drivers/rte_bus_pci.pmd.c 
with a custom command 00:04:18.363 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.363 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.363 [206/268] Linking static target drivers/librte_bus_pci.a 00:04:18.363 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:18.363 [208/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:18.363 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:18.363 [210/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:18.363 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.363 [212/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.621 [213/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.621 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.621 [215/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.621 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:18.621 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.621 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.621 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.621 [220/268] Linking static target drivers/librte_mempool_ring.a 00:04:18.621 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.880 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:18.880 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:18.880 [224/268] Linking static target lib/librte_cryptodev.a 00:04:18.880 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:18.880 [226/268] Linking static target lib/librte_ethdev.a 00:04:19.866 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.766 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:23.142 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.142 [230/268] Linking target lib/librte_eal.so.24.1 00:04:23.142 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:23.400 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:23.400 [233/268] Linking target lib/librte_ring.so.24.1 00:04:23.400 [234/268] Linking target lib/librte_meter.so.24.1 00:04:23.400 [235/268] Linking target lib/librte_pci.so.24.1 00:04:23.400 [236/268] Linking target lib/librte_timer.so.24.1 00:04:23.400 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:23.400 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.400 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:23.400 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:23.400 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:23.400 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:23.400 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:23.400 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:23.400 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:23.400 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:23.657 [247/268] 
Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:23.657 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:23.657 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:23.657 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:23.657 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:23.657 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:23.914 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:23.914 [254/268] Linking target lib/librte_net.so.24.1 00:04:23.914 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:23.914 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:23.914 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:23.914 [258/268] Linking target lib/librte_security.so.24.1 00:04:23.914 [259/268] Linking target lib/librte_hash.so.24.1 00:04:23.914 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:23.914 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:24.173 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:24.173 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:24.173 [264/268] Linking target lib/librte_power.so.24.1 00:04:27.458 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:27.458 [266/268] Linking static target lib/librte_vhost.a 00:04:28.024 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.024 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:28.024 INFO: autodetecting backend as ninja 00:04:28.024 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:49.953 CC lib/ut_mock/mock.o 00:04:49.953 CC lib/log/log.o 
00:04:49.953 CC lib/log/log_flags.o 00:04:49.953 CC lib/log/log_deprecated.o 00:04:49.953 CC lib/ut/ut.o 00:04:49.953 LIB libspdk_ut.a 00:04:49.953 LIB libspdk_ut_mock.a 00:04:49.953 LIB libspdk_log.a 00:04:49.953 SO libspdk_ut_mock.so.6.0 00:04:49.953 SO libspdk_ut.so.2.0 00:04:49.953 SO libspdk_log.so.7.1 00:04:49.953 SYMLINK libspdk_ut.so 00:04:49.953 SYMLINK libspdk_ut_mock.so 00:04:49.953 SYMLINK libspdk_log.so 00:04:49.953 CXX lib/trace_parser/trace.o 00:04:49.953 CC lib/dma/dma.o 00:04:49.953 CC lib/util/base64.o 00:04:49.953 CC lib/util/bit_array.o 00:04:49.953 CC lib/util/cpuset.o 00:04:49.953 CC lib/ioat/ioat.o 00:04:49.953 CC lib/util/crc16.o 00:04:49.953 CC lib/util/crc32.o 00:04:49.953 CC lib/util/crc32c.o 00:04:49.953 CC lib/util/crc32_ieee.o 00:04:49.953 CC lib/util/crc64.o 00:04:49.953 CC lib/util/dif.o 00:04:49.953 CC lib/util/fd.o 00:04:49.953 CC lib/util/fd_group.o 00:04:49.953 CC lib/util/file.o 00:04:49.953 CC lib/util/hexlify.o 00:04:49.953 CC lib/util/iov.o 00:04:49.953 CC lib/util/math.o 00:04:49.953 CC lib/util/net.o 00:04:49.953 CC lib/util/pipe.o 00:04:49.953 CC lib/util/strerror_tls.o 00:04:49.953 CC lib/util/string.o 00:04:49.953 CC lib/util/uuid.o 00:04:49.953 CC lib/util/xor.o 00:04:49.953 CC lib/util/zipf.o 00:04:49.953 CC lib/util/md5.o 00:04:49.953 CC lib/vfio_user/host/vfio_user_pci.o 00:04:49.953 CC lib/vfio_user/host/vfio_user.o 00:04:49.953 LIB libspdk_dma.a 00:04:49.953 SO libspdk_dma.so.5.0 00:04:49.953 SYMLINK libspdk_dma.so 00:04:49.953 LIB libspdk_ioat.a 00:04:49.953 LIB libspdk_vfio_user.a 00:04:49.953 SO libspdk_ioat.so.7.0 00:04:49.953 SO libspdk_vfio_user.so.5.0 00:04:49.953 SYMLINK libspdk_ioat.so 00:04:49.953 SYMLINK libspdk_vfio_user.so 00:04:49.953 LIB libspdk_util.a 00:04:49.953 SO libspdk_util.so.10.1 00:04:49.953 SYMLINK libspdk_util.so 00:04:49.953 CC lib/conf/conf.o 00:04:49.953 CC lib/json/json_parse.o 00:04:49.953 CC lib/vmd/vmd.o 00:04:49.953 CC lib/rdma_utils/rdma_utils.o 00:04:49.953 CC lib/idxd/idxd.o 
00:04:49.953 CC lib/env_dpdk/env.o 00:04:49.953 CC lib/vmd/led.o 00:04:49.953 CC lib/idxd/idxd_user.o 00:04:49.953 CC lib/json/json_util.o 00:04:49.953 CC lib/env_dpdk/memory.o 00:04:49.953 CC lib/idxd/idxd_kernel.o 00:04:49.953 CC lib/json/json_write.o 00:04:49.953 CC lib/env_dpdk/pci.o 00:04:49.953 CC lib/env_dpdk/init.o 00:04:49.953 CC lib/env_dpdk/threads.o 00:04:49.953 CC lib/env_dpdk/pci_ioat.o 00:04:49.953 CC lib/env_dpdk/pci_virtio.o 00:04:49.953 CC lib/env_dpdk/pci_vmd.o 00:04:49.953 CC lib/env_dpdk/pci_idxd.o 00:04:49.953 CC lib/env_dpdk/sigbus_handler.o 00:04:49.953 CC lib/env_dpdk/pci_event.o 00:04:49.953 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:49.954 CC lib/env_dpdk/pci_dpdk.o 00:04:49.954 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:49.954 LIB libspdk_trace_parser.a 00:04:49.954 SO libspdk_trace_parser.so.6.0 00:04:49.954 SYMLINK libspdk_trace_parser.so 00:04:49.954 LIB libspdk_conf.a 00:04:49.954 SO libspdk_conf.so.6.0 00:04:49.954 LIB libspdk_rdma_utils.a 00:04:49.954 SO libspdk_rdma_utils.so.1.0 00:04:49.954 LIB libspdk_json.a 00:04:49.954 SYMLINK libspdk_conf.so 00:04:49.954 SO libspdk_json.so.6.0 00:04:49.954 SYMLINK libspdk_rdma_utils.so 00:04:49.954 SYMLINK libspdk_json.so 00:04:49.954 CC lib/rdma_provider/common.o 00:04:49.954 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:49.954 CC lib/jsonrpc/jsonrpc_server.o 00:04:49.954 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:49.954 CC lib/jsonrpc/jsonrpc_client.o 00:04:49.954 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:49.954 LIB libspdk_idxd.a 00:04:49.954 SO libspdk_idxd.so.12.1 00:04:49.954 LIB libspdk_vmd.a 00:04:49.954 SO libspdk_vmd.so.6.0 00:04:49.954 SYMLINK libspdk_idxd.so 00:04:49.954 SYMLINK libspdk_vmd.so 00:04:49.954 LIB libspdk_rdma_provider.a 00:04:49.954 SO libspdk_rdma_provider.so.7.0 00:04:49.954 LIB libspdk_jsonrpc.a 00:04:49.954 SYMLINK libspdk_rdma_provider.so 00:04:49.954 SO libspdk_jsonrpc.so.6.0 00:04:49.954 SYMLINK libspdk_jsonrpc.so 00:04:49.954 CC lib/rpc/rpc.o 00:04:49.954 LIB 
libspdk_rpc.a 00:04:49.954 SO libspdk_rpc.so.6.0 00:04:49.954 SYMLINK libspdk_rpc.so 00:04:49.954 CC lib/keyring/keyring.o 00:04:49.954 CC lib/keyring/keyring_rpc.o 00:04:49.954 CC lib/notify/notify.o 00:04:49.954 CC lib/trace/trace.o 00:04:49.954 CC lib/notify/notify_rpc.o 00:04:49.954 CC lib/trace/trace_flags.o 00:04:49.954 CC lib/trace/trace_rpc.o 00:04:49.954 LIB libspdk_notify.a 00:04:49.954 SO libspdk_notify.so.6.0 00:04:49.954 SYMLINK libspdk_notify.so 00:04:49.954 LIB libspdk_keyring.a 00:04:50.211 SO libspdk_keyring.so.2.0 00:04:50.211 LIB libspdk_trace.a 00:04:50.211 SO libspdk_trace.so.11.0 00:04:50.211 SYMLINK libspdk_keyring.so 00:04:50.211 SYMLINK libspdk_trace.so 00:04:50.211 CC lib/thread/thread.o 00:04:50.211 CC lib/thread/iobuf.o 00:04:50.211 LIB libspdk_env_dpdk.a 00:04:50.211 CC lib/sock/sock.o 00:04:50.469 CC lib/sock/sock_rpc.o 00:04:50.469 SO libspdk_env_dpdk.so.15.1 00:04:50.469 SYMLINK libspdk_env_dpdk.so 00:04:50.727 LIB libspdk_sock.a 00:04:50.727 SO libspdk_sock.so.10.0 00:04:50.727 SYMLINK libspdk_sock.so 00:04:50.986 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:50.986 CC lib/nvme/nvme_ctrlr.o 00:04:50.986 CC lib/nvme/nvme_fabric.o 00:04:50.986 CC lib/nvme/nvme_ns_cmd.o 00:04:50.986 CC lib/nvme/nvme_ns.o 00:04:50.986 CC lib/nvme/nvme_pcie_common.o 00:04:50.986 CC lib/nvme/nvme_pcie.o 00:04:50.986 CC lib/nvme/nvme_qpair.o 00:04:50.986 CC lib/nvme/nvme.o 00:04:50.986 CC lib/nvme/nvme_quirks.o 00:04:50.986 CC lib/nvme/nvme_transport.o 00:04:50.986 CC lib/nvme/nvme_discovery.o 00:04:50.986 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:50.986 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:50.986 CC lib/nvme/nvme_tcp.o 00:04:50.986 CC lib/nvme/nvme_opal.o 00:04:50.986 CC lib/nvme/nvme_io_msg.o 00:04:50.986 CC lib/nvme/nvme_poll_group.o 00:04:50.986 CC lib/nvme/nvme_zns.o 00:04:50.986 CC lib/nvme/nvme_stubs.o 00:04:50.986 CC lib/nvme/nvme_auth.o 00:04:50.986 CC lib/nvme/nvme_cuse.o 00:04:50.986 CC lib/nvme/nvme_vfio_user.o 00:04:50.986 CC lib/nvme/nvme_rdma.o 
00:04:51.921 LIB libspdk_thread.a 00:04:51.921 SO libspdk_thread.so.11.0 00:04:52.179 SYMLINK libspdk_thread.so 00:04:52.179 CC lib/virtio/virtio.o 00:04:52.179 CC lib/init/json_config.o 00:04:52.179 CC lib/fsdev/fsdev.o 00:04:52.179 CC lib/virtio/virtio_vhost_user.o 00:04:52.179 CC lib/accel/accel.o 00:04:52.179 CC lib/init/subsystem.o 00:04:52.179 CC lib/blob/blobstore.o 00:04:52.179 CC lib/vfu_tgt/tgt_endpoint.o 00:04:52.179 CC lib/fsdev/fsdev_io.o 00:04:52.179 CC lib/accel/accel_rpc.o 00:04:52.179 CC lib/init/subsystem_rpc.o 00:04:52.179 CC lib/virtio/virtio_vfio_user.o 00:04:52.179 CC lib/fsdev/fsdev_rpc.o 00:04:52.179 CC lib/vfu_tgt/tgt_rpc.o 00:04:52.179 CC lib/accel/accel_sw.o 00:04:52.179 CC lib/blob/request.o 00:04:52.179 CC lib/blob/zeroes.o 00:04:52.179 CC lib/virtio/virtio_pci.o 00:04:52.179 CC lib/init/rpc.o 00:04:52.179 CC lib/blob/blob_bs_dev.o 00:04:52.437 LIB libspdk_init.a 00:04:52.437 SO libspdk_init.so.6.0 00:04:52.697 SYMLINK libspdk_init.so 00:04:52.697 LIB libspdk_vfu_tgt.a 00:04:52.697 LIB libspdk_virtio.a 00:04:52.697 SO libspdk_vfu_tgt.so.3.0 00:04:52.697 SO libspdk_virtio.so.7.0 00:04:52.697 SYMLINK libspdk_vfu_tgt.so 00:04:52.697 SYMLINK libspdk_virtio.so 00:04:52.697 CC lib/event/app.o 00:04:52.697 CC lib/event/reactor.o 00:04:52.697 CC lib/event/log_rpc.o 00:04:52.697 CC lib/event/app_rpc.o 00:04:52.697 CC lib/event/scheduler_static.o 00:04:52.955 LIB libspdk_fsdev.a 00:04:52.955 SO libspdk_fsdev.so.2.0 00:04:52.955 SYMLINK libspdk_fsdev.so 00:04:53.213 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:53.213 LIB libspdk_event.a 00:04:53.213 SO libspdk_event.so.14.0 00:04:53.213 SYMLINK libspdk_event.so 00:04:53.472 LIB libspdk_accel.a 00:04:53.472 SO libspdk_accel.so.16.0 00:04:53.472 LIB libspdk_nvme.a 00:04:53.472 SYMLINK libspdk_accel.so 00:04:53.730 SO libspdk_nvme.so.15.0 00:04:53.730 CC lib/bdev/bdev.o 00:04:53.730 CC lib/bdev/bdev_rpc.o 00:04:53.730 CC lib/bdev/bdev_zone.o 00:04:53.730 CC lib/bdev/part.o 00:04:53.730 CC 
lib/bdev/scsi_nvme.o 00:04:53.730 SYMLINK libspdk_nvme.so 00:04:53.730 LIB libspdk_fuse_dispatcher.a 00:04:53.989 SO libspdk_fuse_dispatcher.so.1.0 00:04:53.989 SYMLINK libspdk_fuse_dispatcher.so 00:04:55.364 LIB libspdk_blob.a 00:04:55.364 SO libspdk_blob.so.11.0 00:04:55.364 SYMLINK libspdk_blob.so 00:04:55.623 CC lib/blobfs/blobfs.o 00:04:55.623 CC lib/blobfs/tree.o 00:04:55.623 CC lib/lvol/lvol.o 00:04:56.558 LIB libspdk_bdev.a 00:04:56.558 LIB libspdk_blobfs.a 00:04:56.558 SO libspdk_blobfs.so.10.0 00:04:56.558 SO libspdk_bdev.so.17.0 00:04:56.558 SYMLINK libspdk_blobfs.so 00:04:56.558 LIB libspdk_lvol.a 00:04:56.558 SYMLINK libspdk_bdev.so 00:04:56.558 SO libspdk_lvol.so.10.0 00:04:56.558 SYMLINK libspdk_lvol.so 00:04:56.558 CC lib/scsi/dev.o 00:04:56.558 CC lib/ublk/ublk.o 00:04:56.558 CC lib/nvmf/ctrlr.o 00:04:56.558 CC lib/nbd/nbd.o 00:04:56.558 CC lib/ublk/ublk_rpc.o 00:04:56.558 CC lib/scsi/lun.o 00:04:56.558 CC lib/nbd/nbd_rpc.o 00:04:56.558 CC lib/nvmf/ctrlr_discovery.o 00:04:56.822 CC lib/ftl/ftl_core.o 00:04:56.822 CC lib/scsi/port.o 00:04:56.822 CC lib/nvmf/ctrlr_bdev.o 00:04:56.822 CC lib/ftl/ftl_init.o 00:04:56.822 CC lib/scsi/scsi.o 00:04:56.822 CC lib/nvmf/subsystem.o 00:04:56.822 CC lib/scsi/scsi_bdev.o 00:04:56.822 CC lib/nvmf/nvmf.o 00:04:56.822 CC lib/ftl/ftl_debug.o 00:04:56.822 CC lib/scsi/scsi_pr.o 00:04:56.822 CC lib/ftl/ftl_layout.o 00:04:56.822 CC lib/scsi/scsi_rpc.o 00:04:56.822 CC lib/ftl/ftl_io.o 00:04:56.822 CC lib/nvmf/nvmf_rpc.o 00:04:56.822 CC lib/scsi/task.o 00:04:56.822 CC lib/ftl/ftl_sb.o 00:04:56.822 CC lib/nvmf/transport.o 00:04:56.822 CC lib/nvmf/tcp.o 00:04:56.822 CC lib/ftl/ftl_l2p.o 00:04:56.822 CC lib/nvmf/stubs.o 00:04:56.822 CC lib/ftl/ftl_l2p_flat.o 00:04:56.822 CC lib/nvmf/mdns_server.o 00:04:56.822 CC lib/ftl/ftl_nv_cache.o 00:04:56.822 CC lib/nvmf/vfio_user.o 00:04:56.822 CC lib/ftl/ftl_band.o 00:04:56.822 CC lib/ftl/ftl_band_ops.o 00:04:56.822 CC lib/nvmf/rdma.o 00:04:56.822 CC lib/ftl/ftl_writer.o 00:04:56.822 
CC lib/nvmf/auth.o 00:04:56.822 CC lib/ftl/ftl_rq.o 00:04:56.822 CC lib/ftl/ftl_reloc.o 00:04:56.822 CC lib/ftl/ftl_l2p_cache.o 00:04:56.822 CC lib/ftl/ftl_p2l.o 00:04:56.822 CC lib/ftl/ftl_p2l_log.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:56.822 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:57.082 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:57.082 CC lib/ftl/utils/ftl_conf.o 00:04:57.082 CC lib/ftl/utils/ftl_md.o 00:04:57.082 CC lib/ftl/utils/ftl_mempool.o 00:04:57.082 CC lib/ftl/utils/ftl_bitmap.o 00:04:57.082 CC lib/ftl/utils/ftl_property.o 00:04:57.082 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:57.082 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:57.082 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:57.082 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:57.347 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:57.347 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:57.347 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:57.347 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:57.347 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:57.347 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:57.347 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:57.347 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:57.347 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:57.347 CC lib/ftl/base/ftl_base_dev.o 00:04:57.347 CC lib/ftl/base/ftl_base_bdev.o 00:04:57.606 CC lib/ftl/ftl_trace.o 00:04:57.606 LIB libspdk_nbd.a 00:04:57.606 SO libspdk_nbd.so.7.0 00:04:57.606 SYMLINK libspdk_nbd.so 00:04:57.606 LIB libspdk_scsi.a 00:04:57.606 SO libspdk_scsi.so.9.0 00:04:57.864 SYMLINK libspdk_scsi.so 00:04:57.864 LIB 
libspdk_ublk.a 00:04:57.864 SO libspdk_ublk.so.3.0 00:04:57.864 CC lib/vhost/vhost.o 00:04:57.864 CC lib/vhost/vhost_rpc.o 00:04:57.864 CC lib/vhost/vhost_scsi.o 00:04:57.864 CC lib/vhost/vhost_blk.o 00:04:57.864 CC lib/vhost/rte_vhost_user.o 00:04:57.864 CC lib/iscsi/conn.o 00:04:57.865 CC lib/iscsi/init_grp.o 00:04:57.865 CC lib/iscsi/iscsi.o 00:04:57.865 CC lib/iscsi/param.o 00:04:57.865 CC lib/iscsi/portal_grp.o 00:04:57.865 CC lib/iscsi/tgt_node.o 00:04:57.865 CC lib/iscsi/iscsi_subsystem.o 00:04:57.865 CC lib/iscsi/iscsi_rpc.o 00:04:57.865 CC lib/iscsi/task.o 00:04:57.865 SYMLINK libspdk_ublk.so 00:04:58.123 LIB libspdk_ftl.a 00:04:58.381 SO libspdk_ftl.so.9.0 00:04:58.639 SYMLINK libspdk_ftl.so 00:04:59.206 LIB libspdk_vhost.a 00:04:59.206 SO libspdk_vhost.so.8.0 00:04:59.206 SYMLINK libspdk_vhost.so 00:04:59.464 LIB libspdk_nvmf.a 00:04:59.464 LIB libspdk_iscsi.a 00:04:59.464 SO libspdk_iscsi.so.8.0 00:04:59.464 SO libspdk_nvmf.so.20.0 00:04:59.464 SYMLINK libspdk_iscsi.so 00:04:59.723 SYMLINK libspdk_nvmf.so 00:04:59.981 CC module/env_dpdk/env_dpdk_rpc.o 00:04:59.981 CC module/vfu_device/vfu_virtio.o 00:04:59.981 CC module/vfu_device/vfu_virtio_blk.o 00:04:59.981 CC module/vfu_device/vfu_virtio_scsi.o 00:04:59.981 CC module/vfu_device/vfu_virtio_rpc.o 00:04:59.981 CC module/vfu_device/vfu_virtio_fs.o 00:04:59.981 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:59.981 CC module/accel/ioat/accel_ioat.o 00:04:59.981 CC module/keyring/file/keyring.o 00:04:59.981 CC module/accel/iaa/accel_iaa.o 00:04:59.981 CC module/keyring/file/keyring_rpc.o 00:04:59.981 CC module/fsdev/aio/fsdev_aio.o 00:04:59.981 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:59.981 CC module/accel/iaa/accel_iaa_rpc.o 00:04:59.981 CC module/blob/bdev/blob_bdev.o 00:04:59.981 CC module/accel/ioat/accel_ioat_rpc.o 00:04:59.981 CC module/sock/posix/posix.o 00:04:59.981 CC module/fsdev/aio/linux_aio_mgr.o 00:04:59.981 CC module/keyring/linux/keyring.o 00:04:59.981 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:59.981 CC module/keyring/linux/keyring_rpc.o 00:04:59.981 CC module/accel/error/accel_error_rpc.o 00:04:59.981 CC module/accel/error/accel_error.o 00:04:59.981 CC module/accel/dsa/accel_dsa_rpc.o 00:04:59.981 CC module/accel/dsa/accel_dsa.o 00:04:59.981 CC module/scheduler/gscheduler/gscheduler.o 00:05:00.240 LIB libspdk_env_dpdk_rpc.a 00:05:00.240 SO libspdk_env_dpdk_rpc.so.6.0 00:05:00.240 LIB libspdk_scheduler_gscheduler.a 00:05:00.240 LIB libspdk_scheduler_dpdk_governor.a 00:05:00.240 LIB libspdk_keyring_linux.a 00:05:00.240 SYMLINK libspdk_env_dpdk_rpc.so 00:05:00.240 SO libspdk_scheduler_gscheduler.so.4.0 00:05:00.240 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:00.240 SO libspdk_keyring_linux.so.1.0 00:05:00.240 LIB libspdk_accel_ioat.a 00:05:00.240 LIB libspdk_accel_iaa.a 00:05:00.240 LIB libspdk_scheduler_dynamic.a 00:05:00.240 LIB libspdk_keyring_file.a 00:05:00.240 LIB libspdk_accel_error.a 00:05:00.240 SO libspdk_accel_ioat.so.6.0 00:05:00.240 SO libspdk_scheduler_dynamic.so.4.0 00:05:00.240 SO libspdk_accel_iaa.so.3.0 00:05:00.240 SO libspdk_keyring_file.so.2.0 00:05:00.240 SYMLINK libspdk_scheduler_gscheduler.so 00:05:00.240 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:00.240 SO libspdk_accel_error.so.2.0 00:05:00.240 SYMLINK libspdk_keyring_linux.so 00:05:00.240 SYMLINK libspdk_accel_ioat.so 00:05:00.240 SYMLINK libspdk_scheduler_dynamic.so 00:05:00.240 SYMLINK libspdk_keyring_file.so 00:05:00.240 LIB libspdk_blob_bdev.a 00:05:00.240 SYMLINK libspdk_accel_iaa.so 00:05:00.240 SYMLINK libspdk_accel_error.so 00:05:00.240 SO libspdk_blob_bdev.so.11.0 00:05:00.498 LIB libspdk_accel_dsa.a 00:05:00.498 SO libspdk_accel_dsa.so.5.0 00:05:00.498 SYMLINK libspdk_blob_bdev.so 00:05:00.498 SYMLINK libspdk_accel_dsa.so 00:05:00.498 LIB libspdk_vfu_device.a 00:05:00.756 SO libspdk_vfu_device.so.3.0 00:05:00.756 CC module/bdev/gpt/gpt.o 00:05:00.756 CC module/bdev/null/bdev_null.o 00:05:00.756 CC 
module/blobfs/bdev/blobfs_bdev.o 00:05:00.756 CC module/bdev/gpt/vbdev_gpt.o 00:05:00.756 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:00.756 CC module/bdev/null/bdev_null_rpc.o 00:05:00.756 CC module/bdev/aio/bdev_aio.o 00:05:00.756 CC module/bdev/aio/bdev_aio_rpc.o 00:05:00.757 CC module/bdev/nvme/bdev_nvme.o 00:05:00.757 CC module/bdev/lvol/vbdev_lvol.o 00:05:00.757 CC module/bdev/passthru/vbdev_passthru.o 00:05:00.757 CC module/bdev/raid/bdev_raid.o 00:05:00.757 CC module/bdev/malloc/bdev_malloc.o 00:05:00.757 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:00.757 CC module/bdev/error/vbdev_error.o 00:05:00.757 CC module/bdev/error/vbdev_error_rpc.o 00:05:00.757 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:00.757 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:00.757 CC module/bdev/raid/bdev_raid_rpc.o 00:05:00.757 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:00.757 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:00.757 CC module/bdev/raid/bdev_raid_sb.o 00:05:00.757 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:00.757 CC module/bdev/delay/vbdev_delay.o 00:05:00.757 CC module/bdev/iscsi/bdev_iscsi.o 00:05:00.757 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:00.757 CC module/bdev/raid/raid0.o 00:05:00.757 CC module/bdev/nvme/nvme_rpc.o 00:05:00.757 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:00.757 CC module/bdev/raid/raid1.o 00:05:00.757 CC module/bdev/split/vbdev_split.o 00:05:00.757 CC module/bdev/nvme/bdev_mdns_client.o 00:05:00.757 CC module/bdev/nvme/vbdev_opal.o 00:05:00.757 CC module/bdev/raid/concat.o 00:05:00.757 CC module/bdev/ftl/bdev_ftl.o 00:05:00.757 CC module/bdev/split/vbdev_split_rpc.o 00:05:00.757 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:00.757 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:00.757 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:00.757 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:00.757 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:00.757 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:00.757 SYMLINK libspdk_vfu_device.so 
00:05:00.757 LIB libspdk_fsdev_aio.a 00:05:01.015 SO libspdk_fsdev_aio.so.1.0 00:05:01.015 SYMLINK libspdk_fsdev_aio.so 00:05:01.015 LIB libspdk_sock_posix.a 00:05:01.015 SO libspdk_sock_posix.so.6.0 00:05:01.015 LIB libspdk_blobfs_bdev.a 00:05:01.015 SO libspdk_blobfs_bdev.so.6.0 00:05:01.015 SYMLINK libspdk_sock_posix.so 00:05:01.015 LIB libspdk_bdev_split.a 00:05:01.015 SYMLINK libspdk_blobfs_bdev.so 00:05:01.273 LIB libspdk_bdev_null.a 00:05:01.273 SO libspdk_bdev_split.so.6.0 00:05:01.273 SO libspdk_bdev_null.so.6.0 00:05:01.273 LIB libspdk_bdev_passthru.a 00:05:01.273 LIB libspdk_bdev_error.a 00:05:01.273 LIB libspdk_bdev_gpt.a 00:05:01.273 SO libspdk_bdev_error.so.6.0 00:05:01.273 SO libspdk_bdev_passthru.so.6.0 00:05:01.273 SO libspdk_bdev_gpt.so.6.0 00:05:01.273 SYMLINK libspdk_bdev_split.so 00:05:01.273 LIB libspdk_bdev_ftl.a 00:05:01.273 LIB libspdk_bdev_malloc.a 00:05:01.273 SYMLINK libspdk_bdev_null.so 00:05:01.273 LIB libspdk_bdev_aio.a 00:05:01.273 SO libspdk_bdev_ftl.so.6.0 00:05:01.273 LIB libspdk_bdev_delay.a 00:05:01.273 SO libspdk_bdev_malloc.so.6.0 00:05:01.273 SYMLINK libspdk_bdev_gpt.so 00:05:01.273 SYMLINK libspdk_bdev_passthru.so 00:05:01.273 SYMLINK libspdk_bdev_error.so 00:05:01.273 SO libspdk_bdev_aio.so.6.0 00:05:01.273 SO libspdk_bdev_delay.so.6.0 00:05:01.273 LIB libspdk_bdev_iscsi.a 00:05:01.273 LIB libspdk_bdev_zone_block.a 00:05:01.273 SYMLINK libspdk_bdev_ftl.so 00:05:01.273 SO libspdk_bdev_iscsi.so.6.0 00:05:01.273 SYMLINK libspdk_bdev_malloc.so 00:05:01.273 SYMLINK libspdk_bdev_aio.so 00:05:01.273 SO libspdk_bdev_zone_block.so.6.0 00:05:01.273 SYMLINK libspdk_bdev_delay.so 00:05:01.273 SYMLINK libspdk_bdev_iscsi.so 00:05:01.273 LIB libspdk_bdev_lvol.a 00:05:01.273 SYMLINK libspdk_bdev_zone_block.so 00:05:01.532 SO libspdk_bdev_lvol.so.6.0 00:05:01.532 SYMLINK libspdk_bdev_lvol.so 00:05:01.532 LIB libspdk_bdev_virtio.a 00:05:01.532 SO libspdk_bdev_virtio.so.6.0 00:05:01.533 SYMLINK libspdk_bdev_virtio.so 00:05:02.108 LIB 
libspdk_bdev_raid.a 00:05:02.108 SO libspdk_bdev_raid.so.6.0 00:05:02.108 SYMLINK libspdk_bdev_raid.so 00:05:03.511 LIB libspdk_bdev_nvme.a 00:05:03.511 SO libspdk_bdev_nvme.so.7.1 00:05:03.511 SYMLINK libspdk_bdev_nvme.so 00:05:04.078 CC module/event/subsystems/iobuf/iobuf.o 00:05:04.078 CC module/event/subsystems/vmd/vmd.o 00:05:04.078 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:04.078 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:04.078 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:04.078 CC module/event/subsystems/scheduler/scheduler.o 00:05:04.078 CC module/event/subsystems/fsdev/fsdev.o 00:05:04.078 CC module/event/subsystems/sock/sock.o 00:05:04.078 CC module/event/subsystems/keyring/keyring.o 00:05:04.078 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:04.078 LIB libspdk_event_keyring.a 00:05:04.078 LIB libspdk_event_fsdev.a 00:05:04.078 LIB libspdk_event_vhost_blk.a 00:05:04.078 LIB libspdk_event_scheduler.a 00:05:04.078 LIB libspdk_event_vfu_tgt.a 00:05:04.078 LIB libspdk_event_sock.a 00:05:04.078 LIB libspdk_event_vmd.a 00:05:04.078 SO libspdk_event_keyring.so.1.0 00:05:04.078 LIB libspdk_event_iobuf.a 00:05:04.078 SO libspdk_event_fsdev.so.1.0 00:05:04.078 SO libspdk_event_vhost_blk.so.3.0 00:05:04.078 SO libspdk_event_scheduler.so.4.0 00:05:04.078 SO libspdk_event_vfu_tgt.so.3.0 00:05:04.078 SO libspdk_event_sock.so.5.0 00:05:04.078 SO libspdk_event_vmd.so.6.0 00:05:04.078 SO libspdk_event_iobuf.so.3.0 00:05:04.078 SYMLINK libspdk_event_keyring.so 00:05:04.078 SYMLINK libspdk_event_fsdev.so 00:05:04.078 SYMLINK libspdk_event_vhost_blk.so 00:05:04.078 SYMLINK libspdk_event_scheduler.so 00:05:04.078 SYMLINK libspdk_event_vfu_tgt.so 00:05:04.078 SYMLINK libspdk_event_sock.so 00:05:04.078 SYMLINK libspdk_event_vmd.so 00:05:04.078 SYMLINK libspdk_event_iobuf.so 00:05:04.336 CC module/event/subsystems/accel/accel.o 00:05:04.592 LIB libspdk_event_accel.a 00:05:04.593 SO libspdk_event_accel.so.6.0 00:05:04.593 SYMLINK libspdk_event_accel.so 
00:05:04.850 CC module/event/subsystems/bdev/bdev.o 00:05:04.851 LIB libspdk_event_bdev.a 00:05:04.851 SO libspdk_event_bdev.so.6.0 00:05:05.108 SYMLINK libspdk_event_bdev.so 00:05:05.108 CC module/event/subsystems/scsi/scsi.o 00:05:05.108 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:05.108 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:05.108 CC module/event/subsystems/nbd/nbd.o 00:05:05.108 CC module/event/subsystems/ublk/ublk.o 00:05:05.366 LIB libspdk_event_nbd.a 00:05:05.366 LIB libspdk_event_ublk.a 00:05:05.366 LIB libspdk_event_scsi.a 00:05:05.366 SO libspdk_event_nbd.so.6.0 00:05:05.366 SO libspdk_event_ublk.so.3.0 00:05:05.366 SO libspdk_event_scsi.so.6.0 00:05:05.366 SYMLINK libspdk_event_ublk.so 00:05:05.366 SYMLINK libspdk_event_nbd.so 00:05:05.366 SYMLINK libspdk_event_scsi.so 00:05:05.366 LIB libspdk_event_nvmf.a 00:05:05.366 SO libspdk_event_nvmf.so.6.0 00:05:05.624 SYMLINK libspdk_event_nvmf.so 00:05:05.624 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:05.624 CC module/event/subsystems/iscsi/iscsi.o 00:05:05.624 LIB libspdk_event_vhost_scsi.a 00:05:05.882 SO libspdk_event_vhost_scsi.so.3.0 00:05:05.882 LIB libspdk_event_iscsi.a 00:05:05.882 SO libspdk_event_iscsi.so.6.0 00:05:05.882 SYMLINK libspdk_event_vhost_scsi.so 00:05:05.882 SYMLINK libspdk_event_iscsi.so 00:05:05.882 SO libspdk.so.6.0 00:05:05.882 SYMLINK libspdk.so 00:05:06.147 CC app/trace_record/trace_record.o 00:05:06.147 TEST_HEADER include/spdk/accel.h 00:05:06.147 TEST_HEADER include/spdk/accel_module.h 00:05:06.147 TEST_HEADER include/spdk/assert.h 00:05:06.147 TEST_HEADER include/spdk/barrier.h 00:05:06.147 CXX app/trace/trace.o 00:05:06.147 TEST_HEADER include/spdk/base64.h 00:05:06.147 TEST_HEADER include/spdk/bdev.h 00:05:06.147 TEST_HEADER include/spdk/bdev_module.h 00:05:06.147 CC app/spdk_top/spdk_top.o 00:05:06.147 TEST_HEADER include/spdk/bdev_zone.h 00:05:06.147 CC app/spdk_nvme_discover/discovery_aer.o 00:05:06.147 TEST_HEADER include/spdk/bit_array.h 
00:05:06.147 TEST_HEADER include/spdk/bit_pool.h 00:05:06.147 CC test/rpc_client/rpc_client_test.o 00:05:06.147 TEST_HEADER include/spdk/blob_bdev.h 00:05:06.147 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:06.147 TEST_HEADER include/spdk/blob.h 00:05:06.147 TEST_HEADER include/spdk/blobfs.h 00:05:06.147 CC app/spdk_nvme_perf/perf.o 00:05:06.147 TEST_HEADER include/spdk/conf.h 00:05:06.147 CC app/spdk_lspci/spdk_lspci.o 00:05:06.147 TEST_HEADER include/spdk/config.h 00:05:06.147 TEST_HEADER include/spdk/cpuset.h 00:05:06.147 TEST_HEADER include/spdk/crc16.h 00:05:06.147 CC app/spdk_nvme_identify/identify.o 00:05:06.147 TEST_HEADER include/spdk/crc32.h 00:05:06.147 TEST_HEADER include/spdk/crc64.h 00:05:06.147 TEST_HEADER include/spdk/dif.h 00:05:06.147 TEST_HEADER include/spdk/dma.h 00:05:06.147 TEST_HEADER include/spdk/endian.h 00:05:06.147 TEST_HEADER include/spdk/env_dpdk.h 00:05:06.147 TEST_HEADER include/spdk/env.h 00:05:06.147 TEST_HEADER include/spdk/event.h 00:05:06.147 TEST_HEADER include/spdk/fd_group.h 00:05:06.147 TEST_HEADER include/spdk/fd.h 00:05:06.147 TEST_HEADER include/spdk/fsdev.h 00:05:06.147 TEST_HEADER include/spdk/file.h 00:05:06.147 TEST_HEADER include/spdk/fsdev_module.h 00:05:06.147 TEST_HEADER include/spdk/ftl.h 00:05:06.147 TEST_HEADER include/spdk/gpt_spec.h 00:05:06.147 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:06.147 TEST_HEADER include/spdk/hexlify.h 00:05:06.147 TEST_HEADER include/spdk/histogram_data.h 00:05:06.147 TEST_HEADER include/spdk/idxd.h 00:05:06.147 TEST_HEADER include/spdk/idxd_spec.h 00:05:06.147 TEST_HEADER include/spdk/init.h 00:05:06.147 TEST_HEADER include/spdk/ioat.h 00:05:06.147 TEST_HEADER include/spdk/ioat_spec.h 00:05:06.147 TEST_HEADER include/spdk/iscsi_spec.h 00:05:06.147 TEST_HEADER include/spdk/json.h 00:05:06.147 TEST_HEADER include/spdk/jsonrpc.h 00:05:06.147 TEST_HEADER include/spdk/keyring.h 00:05:06.147 TEST_HEADER include/spdk/keyring_module.h 00:05:06.147 TEST_HEADER include/spdk/likely.h 
00:05:06.147 TEST_HEADER include/spdk/log.h 00:05:06.147 TEST_HEADER include/spdk/md5.h 00:05:06.147 TEST_HEADER include/spdk/lvol.h 00:05:06.147 TEST_HEADER include/spdk/memory.h 00:05:06.147 TEST_HEADER include/spdk/mmio.h 00:05:06.147 TEST_HEADER include/spdk/nbd.h 00:05:06.147 TEST_HEADER include/spdk/net.h 00:05:06.147 TEST_HEADER include/spdk/notify.h 00:05:06.147 TEST_HEADER include/spdk/nvme.h 00:05:06.147 TEST_HEADER include/spdk/nvme_intel.h 00:05:06.147 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:06.147 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:06.147 TEST_HEADER include/spdk/nvme_spec.h 00:05:06.147 TEST_HEADER include/spdk/nvme_zns.h 00:05:06.147 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:06.147 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:06.147 TEST_HEADER include/spdk/nvmf.h 00:05:06.147 TEST_HEADER include/spdk/nvmf_spec.h 00:05:06.147 TEST_HEADER include/spdk/nvmf_transport.h 00:05:06.147 TEST_HEADER include/spdk/opal.h 00:05:06.147 TEST_HEADER include/spdk/opal_spec.h 00:05:06.147 TEST_HEADER include/spdk/pci_ids.h 00:05:06.147 TEST_HEADER include/spdk/queue.h 00:05:06.147 TEST_HEADER include/spdk/pipe.h 00:05:06.147 TEST_HEADER include/spdk/reduce.h 00:05:06.147 TEST_HEADER include/spdk/rpc.h 00:05:06.147 TEST_HEADER include/spdk/scheduler.h 00:05:06.147 TEST_HEADER include/spdk/scsi.h 00:05:06.147 TEST_HEADER include/spdk/scsi_spec.h 00:05:06.147 TEST_HEADER include/spdk/sock.h 00:05:06.147 TEST_HEADER include/spdk/stdinc.h 00:05:06.147 TEST_HEADER include/spdk/string.h 00:05:06.147 TEST_HEADER include/spdk/thread.h 00:05:06.147 TEST_HEADER include/spdk/trace.h 00:05:06.147 TEST_HEADER include/spdk/trace_parser.h 00:05:06.147 TEST_HEADER include/spdk/tree.h 00:05:06.147 TEST_HEADER include/spdk/util.h 00:05:06.147 TEST_HEADER include/spdk/ublk.h 00:05:06.147 TEST_HEADER include/spdk/uuid.h 00:05:06.147 TEST_HEADER include/spdk/version.h 00:05:06.147 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:06.147 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:05:06.147 TEST_HEADER include/spdk/vhost.h 00:05:06.147 TEST_HEADER include/spdk/vmd.h 00:05:06.147 TEST_HEADER include/spdk/xor.h 00:05:06.147 TEST_HEADER include/spdk/zipf.h 00:05:06.147 CXX test/cpp_headers/accel.o 00:05:06.147 CXX test/cpp_headers/accel_module.o 00:05:06.147 CXX test/cpp_headers/assert.o 00:05:06.147 CXX test/cpp_headers/barrier.o 00:05:06.147 CXX test/cpp_headers/base64.o 00:05:06.147 CXX test/cpp_headers/bdev.o 00:05:06.147 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:06.147 CXX test/cpp_headers/bdev_module.o 00:05:06.147 CXX test/cpp_headers/bdev_zone.o 00:05:06.147 CXX test/cpp_headers/bit_array.o 00:05:06.147 CXX test/cpp_headers/bit_pool.o 00:05:06.147 CXX test/cpp_headers/blob_bdev.o 00:05:06.147 CXX test/cpp_headers/blobfs_bdev.o 00:05:06.147 CC app/spdk_dd/spdk_dd.o 00:05:06.147 CXX test/cpp_headers/blobfs.o 00:05:06.147 CXX test/cpp_headers/blob.o 00:05:06.147 CXX test/cpp_headers/conf.o 00:05:06.147 CXX test/cpp_headers/config.o 00:05:06.147 CXX test/cpp_headers/cpuset.o 00:05:06.147 CXX test/cpp_headers/crc16.o 00:05:06.147 CC app/iscsi_tgt/iscsi_tgt.o 00:05:06.147 CC app/nvmf_tgt/nvmf_main.o 00:05:06.147 CXX test/cpp_headers/crc32.o 00:05:06.147 CC examples/ioat/verify/verify.o 00:05:06.147 CC examples/ioat/perf/perf.o 00:05:06.147 CC examples/util/zipf/zipf.o 00:05:06.147 CC test/thread/poller_perf/poller_perf.o 00:05:06.147 CC app/spdk_tgt/spdk_tgt.o 00:05:06.147 CC test/app/jsoncat/jsoncat.o 00:05:06.147 CC test/app/stub/stub.o 00:05:06.147 CC test/env/vtophys/vtophys.o 00:05:06.411 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:06.411 CC test/app/histogram_perf/histogram_perf.o 00:05:06.411 CC test/env/pci/pci_ut.o 00:05:06.411 CC app/fio/nvme/fio_plugin.o 00:05:06.411 CC test/env/memory/memory_ut.o 00:05:06.411 CC test/app/bdev_svc/bdev_svc.o 00:05:06.411 CC test/dma/test_dma/test_dma.o 00:05:06.411 CC app/fio/bdev/fio_plugin.o 00:05:06.411 LINK spdk_lspci 00:05:06.411 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:06.411 CC test/env/mem_callbacks/mem_callbacks.o 00:05:06.411 LINK rpc_client_test 00:05:06.673 LINK spdk_nvme_discover 00:05:06.673 LINK poller_perf 00:05:06.673 LINK zipf 00:05:06.673 LINK jsoncat 00:05:06.673 LINK interrupt_tgt 00:05:06.673 LINK histogram_perf 00:05:06.673 LINK vtophys 00:05:06.673 LINK nvmf_tgt 00:05:06.673 CXX test/cpp_headers/crc64.o 00:05:06.673 CXX test/cpp_headers/dif.o 00:05:06.673 CXX test/cpp_headers/dma.o 00:05:06.673 CXX test/cpp_headers/endian.o 00:05:06.673 LINK spdk_trace_record 00:05:06.673 CXX test/cpp_headers/env_dpdk.o 00:05:06.673 CXX test/cpp_headers/env.o 00:05:06.673 CXX test/cpp_headers/event.o 00:05:06.673 CXX test/cpp_headers/fd_group.o 00:05:06.673 CXX test/cpp_headers/fd.o 00:05:06.673 CXX test/cpp_headers/file.o 00:05:06.673 CXX test/cpp_headers/fsdev.o 00:05:06.673 LINK env_dpdk_post_init 00:05:06.673 LINK iscsi_tgt 00:05:06.673 LINK stub 00:05:06.673 LINK ioat_perf 00:05:06.673 CXX test/cpp_headers/fsdev_module.o 00:05:06.673 CXX test/cpp_headers/ftl.o 00:05:06.673 CXX test/cpp_headers/fuse_dispatcher.o 00:05:06.673 CXX test/cpp_headers/gpt_spec.o 00:05:06.673 CXX test/cpp_headers/hexlify.o 00:05:06.673 LINK bdev_svc 00:05:06.673 LINK verify 00:05:06.673 LINK spdk_tgt 00:05:06.934 CXX test/cpp_headers/histogram_data.o 00:05:06.934 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:06.934 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:06.934 CXX test/cpp_headers/idxd.o 00:05:06.934 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:06.934 CXX test/cpp_headers/idxd_spec.o 00:05:06.934 CXX test/cpp_headers/init.o 00:05:06.934 CXX test/cpp_headers/ioat.o 00:05:06.934 CXX test/cpp_headers/ioat_spec.o 00:05:06.934 CXX test/cpp_headers/iscsi_spec.o 00:05:06.934 LINK spdk_dd 00:05:06.934 CXX test/cpp_headers/json.o 00:05:07.195 LINK spdk_trace 00:05:07.195 CXX test/cpp_headers/jsonrpc.o 00:05:07.195 CXX test/cpp_headers/keyring.o 00:05:07.195 CXX test/cpp_headers/keyring_module.o 
00:05:07.195 CXX test/cpp_headers/likely.o 00:05:07.195 CXX test/cpp_headers/log.o 00:05:07.195 CXX test/cpp_headers/lvol.o 00:05:07.195 CXX test/cpp_headers/md5.o 00:05:07.195 CXX test/cpp_headers/memory.o 00:05:07.195 CXX test/cpp_headers/mmio.o 00:05:07.195 LINK pci_ut 00:05:07.195 CXX test/cpp_headers/nbd.o 00:05:07.195 CXX test/cpp_headers/net.o 00:05:07.195 CXX test/cpp_headers/notify.o 00:05:07.195 CXX test/cpp_headers/nvme.o 00:05:07.195 CXX test/cpp_headers/nvme_intel.o 00:05:07.195 CXX test/cpp_headers/nvme_ocssd.o 00:05:07.195 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:07.195 CXX test/cpp_headers/nvme_spec.o 00:05:07.195 CXX test/cpp_headers/nvme_zns.o 00:05:07.195 CXX test/cpp_headers/nvmf_cmd.o 00:05:07.195 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:07.195 CXX test/cpp_headers/nvmf.o 00:05:07.195 CC test/event/event_perf/event_perf.o 00:05:07.456 CC test/event/reactor/reactor.o 00:05:07.456 LINK nvme_fuzz 00:05:07.456 CXX test/cpp_headers/nvmf_spec.o 00:05:07.456 CC test/event/reactor_perf/reactor_perf.o 00:05:07.456 CXX test/cpp_headers/nvmf_transport.o 00:05:07.456 CXX test/cpp_headers/opal.o 00:05:07.456 CXX test/cpp_headers/opal_spec.o 00:05:07.456 LINK spdk_bdev 00:05:07.456 CXX test/cpp_headers/pci_ids.o 00:05:07.456 LINK spdk_nvme 00:05:07.456 CC examples/vmd/lsvmd/lsvmd.o 00:05:07.456 CC examples/sock/hello_world/hello_sock.o 00:05:07.456 CC test/event/app_repeat/app_repeat.o 00:05:07.456 LINK test_dma 00:05:07.456 CC examples/idxd/perf/perf.o 00:05:07.456 CC examples/vmd/led/led.o 00:05:07.456 CC examples/thread/thread/thread_ex.o 00:05:07.456 CC test/event/scheduler/scheduler.o 00:05:07.456 CXX test/cpp_headers/pipe.o 00:05:07.456 CXX test/cpp_headers/queue.o 00:05:07.456 CXX test/cpp_headers/reduce.o 00:05:07.456 CXX test/cpp_headers/rpc.o 00:05:07.456 CXX test/cpp_headers/scheduler.o 00:05:07.456 CXX test/cpp_headers/scsi.o 00:05:07.456 CXX test/cpp_headers/scsi_spec.o 00:05:07.456 CXX test/cpp_headers/sock.o 00:05:07.456 CXX 
test/cpp_headers/stdinc.o 00:05:07.456 CXX test/cpp_headers/string.o 00:05:07.719 CXX test/cpp_headers/thread.o 00:05:07.720 CXX test/cpp_headers/trace.o 00:05:07.720 CXX test/cpp_headers/trace_parser.o 00:05:07.720 CXX test/cpp_headers/tree.o 00:05:07.720 CXX test/cpp_headers/ublk.o 00:05:07.720 CXX test/cpp_headers/util.o 00:05:07.720 CXX test/cpp_headers/uuid.o 00:05:07.720 LINK event_perf 00:05:07.720 CXX test/cpp_headers/version.o 00:05:07.720 CXX test/cpp_headers/vfio_user_pci.o 00:05:07.720 CXX test/cpp_headers/vfio_user_spec.o 00:05:07.720 CXX test/cpp_headers/vhost.o 00:05:07.720 LINK reactor 00:05:07.720 LINK reactor_perf 00:05:07.720 CXX test/cpp_headers/vmd.o 00:05:07.720 CXX test/cpp_headers/xor.o 00:05:07.720 LINK lsvmd 00:05:07.720 LINK mem_callbacks 00:05:07.720 CXX test/cpp_headers/zipf.o 00:05:07.720 CC app/vhost/vhost.o 00:05:07.720 LINK spdk_nvme_perf 00:05:07.720 LINK vhost_fuzz 00:05:07.720 LINK app_repeat 00:05:07.720 LINK led 00:05:07.720 LINK spdk_nvme_identify 00:05:07.980 LINK spdk_top 00:05:07.980 LINK hello_sock 00:05:07.980 LINK scheduler 00:05:07.980 LINK thread 00:05:07.980 LINK idxd_perf 00:05:07.980 CC test/nvme/simple_copy/simple_copy.o 00:05:07.980 CC test/nvme/connect_stress/connect_stress.o 00:05:07.980 CC test/nvme/e2edp/nvme_dp.o 00:05:07.980 CC test/nvme/startup/startup.o 00:05:07.980 CC test/nvme/boot_partition/boot_partition.o 00:05:07.980 CC test/nvme/err_injection/err_injection.o 00:05:07.980 CC test/nvme/reserve/reserve.o 00:05:07.980 CC test/nvme/reset/reset.o 00:05:07.980 CC test/nvme/aer/aer.o 00:05:07.980 CC test/nvme/compliance/nvme_compliance.o 00:05:07.980 CC test/nvme/overhead/overhead.o 00:05:07.980 CC test/nvme/sgl/sgl.o 00:05:08.239 CC test/nvme/fused_ordering/fused_ordering.o 00:05:08.239 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:08.239 CC test/nvme/fdp/fdp.o 00:05:08.239 CC test/nvme/cuse/cuse.o 00:05:08.239 LINK vhost 00:05:08.239 CC test/blobfs/mkfs/mkfs.o 00:05:08.239 CC test/accel/dif/dif.o 
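A little further down, the autotest shell trace (the `10:33:01` entries) walks scripts/common.sh's version helpers: `lt 1.15 2` splits dotted versions on `.-:` into arrays with `read -ra` and compares them component-wise to decide whether the installed lcov is new enough. A minimal sketch of that comparison technique, with a hypothetical function name (`ver_lt` is ours, not the script's):

```shell
#!/bin/bash
# Sketch of component-wise dotted-version comparison, in the style of
# scripts/common.sh's cmp_versions. ver_lt A B succeeds when A < B.
ver_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0, so 1.15 equals 1.15.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal -> not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Comparing per-component avoids the classic string-comparison trap where "1.9" would sort after "1.15"; here 9 and 15 are compared as integers.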
00:05:08.239 CC test/lvol/esnap/esnap.o 00:05:08.239 LINK startup 00:05:08.239 LINK boot_partition 00:05:08.239 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:08.239 CC examples/nvme/abort/abort.o 00:05:08.239 CC examples/nvme/hello_world/hello_world.o 00:05:08.239 LINK err_injection 00:05:08.239 CC examples/nvme/reconnect/reconnect.o 00:05:08.239 LINK connect_stress 00:05:08.239 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:08.239 CC examples/nvme/arbitration/arbitration.o 00:05:08.239 CC examples/nvme/hotplug/hotplug.o 00:05:08.239 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:08.498 LINK doorbell_aers 00:05:08.498 LINK reserve 00:05:08.498 LINK simple_copy 00:05:08.498 LINK mkfs 00:05:08.498 LINK nvme_dp 00:05:08.498 LINK aer 00:05:08.498 CC examples/accel/perf/accel_perf.o 00:05:08.498 LINK memory_ut 00:05:08.498 LINK fused_ordering 00:05:08.498 CC examples/blob/hello_world/hello_blob.o 00:05:08.498 CC examples/blob/cli/blobcli.o 00:05:08.498 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:08.498 LINK overhead 00:05:08.498 LINK reset 00:05:08.498 LINK sgl 00:05:08.757 LINK hello_world 00:05:08.757 LINK nvme_compliance 00:05:08.757 LINK pmr_persistence 00:05:08.757 LINK cmb_copy 00:05:08.757 LINK fdp 00:05:08.757 LINK hotplug 00:05:08.757 LINK arbitration 00:05:08.757 LINK reconnect 00:05:09.016 LINK abort 00:05:09.016 LINK hello_blob 00:05:09.016 LINK nvme_manage 00:05:09.016 LINK hello_fsdev 00:05:09.016 LINK dif 00:05:09.016 LINK blobcli 00:05:09.274 LINK accel_perf 00:05:09.274 LINK iscsi_fuzz 00:05:09.274 CC test/bdev/bdevio/bdevio.o 00:05:09.532 CC examples/bdev/hello_world/hello_bdev.o 00:05:09.532 CC examples/bdev/bdevperf/bdevperf.o 00:05:09.790 LINK cuse 00:05:09.790 LINK bdevio 00:05:09.790 LINK hello_bdev 00:05:10.358 LINK bdevperf 00:05:10.616 CC examples/nvmf/nvmf/nvmf.o 00:05:11.183 LINK nvmf 00:05:13.724 LINK esnap 00:05:13.724 00:05:13.724 real 1m9.366s 00:05:13.724 user 11m53.455s 00:05:13.724 sys 2m38.786s 00:05:13.724 10:33:01 
make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:13.724 10:33:01 make -- common/autotest_common.sh@10 -- $ set +x 00:05:13.724 ************************************ 00:05:13.724 END TEST make 00:05:13.724 ************************************ 00:05:13.724 10:33:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:13.724 10:33:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:13.724 10:33:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:13.724 10:33:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.724 10:33:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:13.724 10:33:01 -- pm/common@44 -- $ pid=1144024 00:05:13.724 10:33:01 -- pm/common@50 -- $ kill -TERM 1144024 00:05:13.724 10:33:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.724 10:33:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:13.724 10:33:01 -- pm/common@44 -- $ pid=1144025 00:05:13.724 10:33:01 -- pm/common@50 -- $ kill -TERM 1144025 00:05:13.724 10:33:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.724 10:33:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:13.724 10:33:01 -- pm/common@44 -- $ pid=1144028 00:05:13.724 10:33:01 -- pm/common@50 -- $ kill -TERM 1144028 00:05:13.724 10:33:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.724 10:33:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:13.724 10:33:01 -- pm/common@44 -- $ pid=1144059 00:05:13.724 10:33:01 -- pm/common@50 -- $ sudo -E kill -TERM 1144059 00:05:13.724 10:33:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:13.724 10:33:01 -- 
spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:13.724 10:33:01 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.724 10:33:01 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.724 10:33:01 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.724 10:33:01 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.724 10:33:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.724 10:33:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.724 10:33:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.724 10:33:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.724 10:33:01 -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.724 10:33:01 -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.724 10:33:01 -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.724 10:33:01 -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.724 10:33:01 -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.724 10:33:01 -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.724 10:33:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.724 10:33:01 -- scripts/common.sh@344 -- # case "$op" in 00:05:13.724 10:33:01 -- scripts/common.sh@345 -- # : 1 00:05:13.724 10:33:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.724 10:33:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.724 10:33:01 -- scripts/common.sh@365 -- # decimal 1 00:05:13.724 10:33:01 -- scripts/common.sh@353 -- # local d=1 00:05:13.724 10:33:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.724 10:33:01 -- scripts/common.sh@355 -- # echo 1 00:05:13.724 10:33:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.724 10:33:01 -- scripts/common.sh@366 -- # decimal 2 00:05:13.724 10:33:01 -- scripts/common.sh@353 -- # local d=2 00:05:13.724 10:33:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.724 10:33:01 -- scripts/common.sh@355 -- # echo 2 00:05:13.724 10:33:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.724 10:33:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.724 10:33:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.724 10:33:01 -- scripts/common.sh@368 -- # return 0 00:05:13.724 10:33:01 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.724 10:33:01 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.724 --rc genhtml_branch_coverage=1 00:05:13.724 --rc genhtml_function_coverage=1 00:05:13.724 --rc genhtml_legend=1 00:05:13.724 --rc geninfo_all_blocks=1 00:05:13.724 --rc geninfo_unexecuted_blocks=1 00:05:13.724 00:05:13.724 ' 00:05:13.724 10:33:01 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.724 --rc genhtml_branch_coverage=1 00:05:13.724 --rc genhtml_function_coverage=1 00:05:13.724 --rc genhtml_legend=1 00:05:13.724 --rc geninfo_all_blocks=1 00:05:13.724 --rc geninfo_unexecuted_blocks=1 00:05:13.724 00:05:13.724 ' 00:05:13.724 10:33:01 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.724 --rc genhtml_branch_coverage=1 00:05:13.724 --rc 
genhtml_function_coverage=1 00:05:13.724 --rc genhtml_legend=1 00:05:13.724 --rc geninfo_all_blocks=1 00:05:13.724 --rc geninfo_unexecuted_blocks=1 00:05:13.724 00:05:13.724 ' 00:05:13.724 10:33:01 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.724 --rc genhtml_branch_coverage=1 00:05:13.724 --rc genhtml_function_coverage=1 00:05:13.724 --rc genhtml_legend=1 00:05:13.724 --rc geninfo_all_blocks=1 00:05:13.724 --rc geninfo_unexecuted_blocks=1 00:05:13.724 00:05:13.724 ' 00:05:13.724 10:33:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.724 10:33:01 -- nvmf/common.sh@7 -- # uname -s 00:05:13.724 10:33:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.724 10:33:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.724 10:33:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.724 10:33:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.724 10:33:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.724 10:33:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.724 10:33:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.724 10:33:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.724 10:33:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.724 10:33:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.724 10:33:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.724 10:33:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.724 10:33:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.724 10:33:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.724 10:33:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.724 10:33:01 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.724 10:33:01 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.724 10:33:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.724 10:33:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.724 10:33:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.724 10:33:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.724 10:33:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.724 10:33:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.724 10:33:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.725 10:33:01 -- paths/export.sh@5 -- # export PATH 00:05:13.725 10:33:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.725 10:33:01 -- nvmf/common.sh@51 -- # : 0 00:05:13.725 10:33:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.725 10:33:01 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:13.725 10:33:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.725 10:33:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.725 10:33:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.725 10:33:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.725 10:33:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.725 10:33:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.725 10:33:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.725 10:33:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:13.725 10:33:01 -- spdk/autotest.sh@32 -- # uname -s 00:05:13.985 10:33:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:13.985 10:33:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:13.985 10:33:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:13.985 10:33:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:13.985 10:33:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:13.985 10:33:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:13.985 10:33:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:13.985 10:33:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:13.985 10:33:01 -- spdk/autotest.sh@48 -- # udevadm_pid=1203557 00:05:13.985 10:33:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:13.985 10:33:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:13.985 10:33:01 -- pm/common@17 -- # local monitor 00:05:13.985 10:33:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.985 10:33:01 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:13.985 10:33:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.985 10:33:01 -- pm/common@21 -- # date +%s 00:05:13.985 10:33:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.985 10:33:01 -- pm/common@21 -- # date +%s 00:05:13.985 10:33:01 -- pm/common@25 -- # sleep 1 00:05:13.985 10:33:01 -- pm/common@21 -- # date +%s 00:05:13.985 10:33:01 -- pm/common@21 -- # date +%s 00:05:13.985 10:33:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008781 00:05:13.985 10:33:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008781 00:05:13.985 10:33:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008781 00:05:13.985 10:33:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008781 00:05:13.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008781_collect-cpu-load.pm.log 00:05:13.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008781_collect-vmstat.pm.log 00:05:13.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008781_collect-cpu-temp.pm.log 00:05:13.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008781_collect-bmc-pm.bmc.pm.log 00:05:14.924 
10:33:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:14.924 10:33:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:14.924 10:33:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.924 10:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.924 10:33:02 -- spdk/autotest.sh@59 -- # create_test_list 00:05:14.924 10:33:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:14.924 10:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.924 10:33:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:14.924 10:33:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.924 10:33:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.924 10:33:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:14.924 10:33:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.924 10:33:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:14.924 10:33:02 -- common/autotest_common.sh@1457 -- # uname 00:05:14.924 10:33:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:14.924 10:33:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:14.924 10:33:02 -- common/autotest_common.sh@1477 -- # uname 00:05:14.924 10:33:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:14.924 10:33:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:14.924 10:33:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:14.924 lcov: LCOV version 1.15 00:05:14.924 10:33:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:36.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:36.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:54.920 10:33:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:54.920 10:33:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.920 10:33:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.920 10:33:40 -- spdk/autotest.sh@78 -- # rm -f 00:05:54.920 10:33:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:54.920 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:54.920 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:54.920 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:54.920 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:54.920 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:54.920 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:54.920 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:54.920 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:54.920 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:05:54.920 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:54.920 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:54.920 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:54.920 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:54.920 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:54.920 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:54.920 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:54.920 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:54.920 10:33:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:54.920 10:33:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:54.920 10:33:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:54.920 10:33:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:54.920 10:33:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:54.920 10:33:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:54.920 10:33:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:54.920 10:33:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:54.920 10:33:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:54.920 10:33:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:54.920 10:33:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:54.920 10:33:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:54.920 10:33:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:54.920 10:33:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:54.920 10:33:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:54.920 No valid GPT data, bailing 00:05:54.920 10:33:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:54.920 10:33:41 -- scripts/common.sh@394 -- # pt= 00:05:54.920 10:33:41 -- scripts/common.sh@395 -- # return 1 00:05:54.920 10:33:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:54.920 1+0 records in 00:05:54.920 1+0 records out 00:05:54.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00191281 s, 548 MB/s 00:05:54.920 10:33:41 -- spdk/autotest.sh@105 -- # sync 00:05:54.920 10:33:41 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:54.920 10:33:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:54.920 10:33:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:56.829 10:33:44 -- spdk/autotest.sh@111 -- # uname -s 00:05:56.829 10:33:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:56.829 10:33:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:56.829 10:33:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:57.764 Hugepages 00:05:57.764 node hugesize free / total 00:05:57.764 node0 1048576kB 0 / 0 00:05:57.764 node0 2048kB 0 / 0 00:05:57.764 node1 1048576kB 0 / 0 00:05:57.764 node1 2048kB 0 / 0 00:05:57.764 00:05:57.764 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:57.764 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:57.764 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:57.764 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:57.764 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:57.764 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:57.764 10:33:45 -- spdk/autotest.sh@117 -- # uname -s 00:05:57.764 10:33:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:57.764 10:33:45 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:57.764 10:33:45 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:59.142 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:59.142 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:59.142 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:00.081 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:00.341 10:33:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:01.336 10:33:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:01.336 10:33:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:01.336 10:33:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.336 10:33:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:01.336 10:33:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:01.336 10:33:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:01.336 10:33:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.336 10:33:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:01.336 10:33:48 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:06:01.336 10:33:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:01.336 10:33:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:06:01.336 10:33:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:02.745 Waiting for block devices as requested 00:06:02.745 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:02.745 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:02.745 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:02.745 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:02.745 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:03.004 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:03.004 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:03.004 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:03.004 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:06:03.264 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:03.264 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:03.523 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:03.523 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:03.523 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:03.523 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:03.782 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:03.782 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:03.782 10:33:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.782 10:33:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:06:03.782 10:33:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:06:03.782 10:33:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:03.782 10:33:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.782 10:33:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.782 10:33:51 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:03.782 10:33:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.782 10:33:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.782 10:33:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:03.782 10:33:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.782 10:33:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.782 10:33:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.782 10:33:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.782 10:33:51 -- common/autotest_common.sh@1543 -- # continue 00:06:03.782 10:33:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:03.782 10:33:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.782 10:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:04.041 10:33:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:04.041 10:33:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.041 10:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:04.041 10:33:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:05.420 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:06:05.420 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:05.420 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:05.420 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:06.359 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:06.359 10:33:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:06.359 10:33:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.359 10:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.359 10:33:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:06.359 10:33:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:06.359 10:33:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:06.359 10:33:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:06.359 10:33:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:06.359 10:33:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:06.359 10:33:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:06.359 10:33:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:06.359 10:33:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:06.359 10:33:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:06.359 10:33:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:06.359 10:33:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:06.359 10:33:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:06.359 10:33:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:06.359 10:33:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:06:06.359 10:33:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:06.359 10:33:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:06:06.359 10:33:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:06.359 10:33:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:06.359 10:33:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:06.359 10:33:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:06.359 10:33:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:06:06.359 10:33:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:06:06.359 10:33:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1214565 00:06:06.359 10:33:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.359 10:33:53 -- common/autotest_common.sh@1585 -- # waitforlisten 1214565 00:06:06.359 10:33:53 -- common/autotest_common.sh@835 -- # '[' -z 1214565 ']' 00:06:06.359 10:33:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.359 10:33:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.359 10:33:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.359 10:33:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.359 10:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 [2024-11-19 10:33:54.021256] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:06.617 [2024-11-19 10:33:54.021390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214565 ] 00:06:06.617 [2024-11-19 10:33:54.086616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.617 [2024-11-19 10:33:54.139843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.876 10:33:54 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.876 10:33:54 -- common/autotest_common.sh@868 -- # return 0 00:06:06.876 10:33:54 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:06.876 10:33:54 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:06.876 10:33:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:06:10.159 nvme0n1 00:06:10.159 10:33:57 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:10.159 [2024-11-19 10:33:57.742632] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:10.159 [2024-11-19 10:33:57.742672] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:10.159 request: 00:06:10.159 { 00:06:10.159 "nvme_ctrlr_name": "nvme0", 00:06:10.159 "password": "test", 00:06:10.159 "method": "bdev_nvme_opal_revert", 00:06:10.159 "req_id": 1 00:06:10.159 } 00:06:10.159 Got JSON-RPC error response 00:06:10.159 response: 00:06:10.159 { 00:06:10.159 
"code": -32603, 00:06:10.159 "message": "Internal error" 00:06:10.159 } 00:06:10.159 10:33:57 -- common/autotest_common.sh@1591 -- # true 00:06:10.159 10:33:57 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:10.159 10:33:57 -- common/autotest_common.sh@1595 -- # killprocess 1214565 00:06:10.159 10:33:57 -- common/autotest_common.sh@954 -- # '[' -z 1214565 ']' 00:06:10.159 10:33:57 -- common/autotest_common.sh@958 -- # kill -0 1214565 00:06:10.159 10:33:57 -- common/autotest_common.sh@959 -- # uname 00:06:10.159 10:33:57 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.159 10:33:57 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214565 00:06:10.418 10:33:57 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.418 10:33:57 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.418 10:33:57 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214565' 00:06:10.418 killing process with pid 1214565 00:06:10.418 10:33:57 -- common/autotest_common.sh@973 -- # kill 1214565 00:06:10.418 10:33:57 -- common/autotest_common.sh@978 -- # wait 1214565 00:06:12.328 10:33:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:12.328 10:33:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:12.328 10:33:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:12.328 10:33:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:12.328 10:33:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:12.328 10:33:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.328 10:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:12.328 10:33:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:12.328 10:33:59 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:12.328 10:33:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.328 10:33:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.328 10:33:59 -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.328 ************************************ 00:06:12.328 START TEST env 00:06:12.328 ************************************ 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:12.328 * Looking for test storage... 00:06:12.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.328 10:33:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.328 10:33:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.328 10:33:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.328 10:33:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.328 10:33:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.328 10:33:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.328 10:33:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.328 10:33:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.328 10:33:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.328 10:33:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.328 10:33:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.328 10:33:59 env -- scripts/common.sh@344 -- # case "$op" in 00:06:12.328 10:33:59 env -- scripts/common.sh@345 -- # : 1 00:06:12.328 10:33:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.328 10:33:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.328 10:33:59 env -- scripts/common.sh@365 -- # decimal 1 00:06:12.328 10:33:59 env -- scripts/common.sh@353 -- # local d=1 00:06:12.328 10:33:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.328 10:33:59 env -- scripts/common.sh@355 -- # echo 1 00:06:12.328 10:33:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.328 10:33:59 env -- scripts/common.sh@366 -- # decimal 2 00:06:12.328 10:33:59 env -- scripts/common.sh@353 -- # local d=2 00:06:12.328 10:33:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.328 10:33:59 env -- scripts/common.sh@355 -- # echo 2 00:06:12.328 10:33:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.328 10:33:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.328 10:33:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.328 10:33:59 env -- scripts/common.sh@368 -- # return 0 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.328 --rc genhtml_branch_coverage=1 00:06:12.328 --rc genhtml_function_coverage=1 00:06:12.328 --rc genhtml_legend=1 00:06:12.328 --rc geninfo_all_blocks=1 00:06:12.328 --rc geninfo_unexecuted_blocks=1 00:06:12.328 00:06:12.328 ' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.328 --rc genhtml_branch_coverage=1 00:06:12.328 --rc genhtml_function_coverage=1 00:06:12.328 --rc genhtml_legend=1 00:06:12.328 --rc geninfo_all_blocks=1 00:06:12.328 --rc geninfo_unexecuted_blocks=1 00:06:12.328 00:06:12.328 ' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:12.328 --rc genhtml_branch_coverage=1 00:06:12.328 --rc genhtml_function_coverage=1 00:06:12.328 --rc genhtml_legend=1 00:06:12.328 --rc geninfo_all_blocks=1 00:06:12.328 --rc geninfo_unexecuted_blocks=1 00:06:12.328 00:06:12.328 ' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.328 --rc genhtml_branch_coverage=1 00:06:12.328 --rc genhtml_function_coverage=1 00:06:12.328 --rc genhtml_legend=1 00:06:12.328 --rc geninfo_all_blocks=1 00:06:12.328 --rc geninfo_unexecuted_blocks=1 00:06:12.328 00:06:12.328 ' 00:06:12.328 10:33:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.328 10:33:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.328 10:33:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.328 ************************************ 00:06:12.328 START TEST env_memory 00:06:12.328 ************************************ 00:06:12.328 10:33:59 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:12.328 00:06:12.328 00:06:12.329 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.329 http://cunit.sourceforge.net/ 00:06:12.329 00:06:12.329 00:06:12.329 Suite: memory 00:06:12.329 Test: alloc and free memory map ...[2024-11-19 10:33:59.763370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:12.329 passed 00:06:12.329 Test: mem map translation ...[2024-11-19 10:33:59.784633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:12.329 [2024-11-19 
10:33:59.784656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:12.329 [2024-11-19 10:33:59.784706] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:12.329 [2024-11-19 10:33:59.784719] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:12.329 passed 00:06:12.329 Test: mem map registration ...[2024-11-19 10:33:59.825458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:12.329 [2024-11-19 10:33:59.825476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:12.329 passed 00:06:12.329 Test: mem map adjacent registrations ...passed 00:06:12.329 00:06:12.329 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.329 suites 1 1 n/a 0 0 00:06:12.329 tests 4 4 4 0 0 00:06:12.329 asserts 152 152 152 0 n/a 00:06:12.329 00:06:12.329 Elapsed time = 0.143 seconds 00:06:12.329 00:06:12.329 real 0m0.152s 00:06:12.329 user 0m0.144s 00:06:12.329 sys 0m0.008s 00:06:12.329 10:33:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.329 10:33:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:12.329 ************************************ 00:06:12.329 END TEST env_memory 00:06:12.329 ************************************ 00:06:12.329 10:33:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:12.329 10:33:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:12.329 10:33:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.329 10:33:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.329 ************************************ 00:06:12.329 START TEST env_vtophys 00:06:12.329 ************************************ 00:06:12.329 10:33:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:12.329 EAL: lib.eal log level changed from notice to debug 00:06:12.329 EAL: Detected lcore 0 as core 0 on socket 0 00:06:12.329 EAL: Detected lcore 1 as core 1 on socket 0 00:06:12.329 EAL: Detected lcore 2 as core 2 on socket 0 00:06:12.329 EAL: Detected lcore 3 as core 3 on socket 0 00:06:12.329 EAL: Detected lcore 4 as core 4 on socket 0 00:06:12.329 EAL: Detected lcore 5 as core 5 on socket 0 00:06:12.329 EAL: Detected lcore 6 as core 8 on socket 0 00:06:12.329 EAL: Detected lcore 7 as core 9 on socket 0 00:06:12.329 EAL: Detected lcore 8 as core 10 on socket 0 00:06:12.329 EAL: Detected lcore 9 as core 11 on socket 0 00:06:12.329 EAL: Detected lcore 10 as core 12 on socket 0 00:06:12.329 EAL: Detected lcore 11 as core 13 on socket 0 00:06:12.329 EAL: Detected lcore 12 as core 0 on socket 1 00:06:12.329 EAL: Detected lcore 13 as core 1 on socket 1 00:06:12.329 EAL: Detected lcore 14 as core 2 on socket 1 00:06:12.329 EAL: Detected lcore 15 as core 3 on socket 1 00:06:12.329 EAL: Detected lcore 16 as core 4 on socket 1 00:06:12.329 EAL: Detected lcore 17 as core 5 on socket 1 00:06:12.329 EAL: Detected lcore 18 as core 8 on socket 1 00:06:12.329 EAL: Detected lcore 19 as core 9 on socket 1 00:06:12.329 EAL: Detected lcore 20 as core 10 on socket 1 00:06:12.329 EAL: Detected lcore 21 as core 11 on socket 1 00:06:12.329 EAL: Detected lcore 22 as core 12 on socket 1 00:06:12.329 EAL: Detected lcore 23 as core 13 on socket 1 00:06:12.329 EAL: Detected lcore 24 as core 0 on socket 0 00:06:12.329 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:12.329 EAL: Detected lcore 26 as core 2 on socket 0 00:06:12.329 EAL: Detected lcore 27 as core 3 on socket 0 00:06:12.329 EAL: Detected lcore 28 as core 4 on socket 0 00:06:12.329 EAL: Detected lcore 29 as core 5 on socket 0 00:06:12.329 EAL: Detected lcore 30 as core 8 on socket 0 00:06:12.329 EAL: Detected lcore 31 as core 9 on socket 0 00:06:12.329 EAL: Detected lcore 32 as core 10 on socket 0 00:06:12.329 EAL: Detected lcore 33 as core 11 on socket 0 00:06:12.329 EAL: Detected lcore 34 as core 12 on socket 0 00:06:12.329 EAL: Detected lcore 35 as core 13 on socket 0 00:06:12.329 EAL: Detected lcore 36 as core 0 on socket 1 00:06:12.329 EAL: Detected lcore 37 as core 1 on socket 1 00:06:12.329 EAL: Detected lcore 38 as core 2 on socket 1 00:06:12.329 EAL: Detected lcore 39 as core 3 on socket 1 00:06:12.329 EAL: Detected lcore 40 as core 4 on socket 1 00:06:12.329 EAL: Detected lcore 41 as core 5 on socket 1 00:06:12.329 EAL: Detected lcore 42 as core 8 on socket 1 00:06:12.329 EAL: Detected lcore 43 as core 9 on socket 1 00:06:12.329 EAL: Detected lcore 44 as core 10 on socket 1 00:06:12.329 EAL: Detected lcore 45 as core 11 on socket 1 00:06:12.329 EAL: Detected lcore 46 as core 12 on socket 1 00:06:12.329 EAL: Detected lcore 47 as core 13 on socket 1 00:06:12.329 EAL: Maximum logical cores by configuration: 128 00:06:12.329 EAL: Detected CPU lcores: 48 00:06:12.329 EAL: Detected NUMA nodes: 2 00:06:12.329 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:12.329 EAL: Detected shared linkage of DPDK 00:06:12.588 EAL: No shared files mode enabled, IPC will be disabled 00:06:12.588 EAL: Bus pci wants IOVA as 'DC' 00:06:12.588 EAL: Buses did not request a specific IOVA mode. 00:06:12.588 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:12.588 EAL: Selected IOVA mode 'VA' 00:06:12.588 EAL: Probing VFIO support... 
00:06:12.588 EAL: IOMMU type 1 (Type 1) is supported 00:06:12.588 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:12.588 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:12.588 EAL: VFIO support initialized 00:06:12.588 EAL: Ask a virtual area of 0x2e000 bytes 00:06:12.588 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:12.588 EAL: Setting up physically contiguous memory... 00:06:12.588 EAL: Setting maximum number of open files to 524288 00:06:12.588 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:12.588 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:12.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:12.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:12.588 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:12.588 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.588 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:12.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.588 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.588 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:06:12.588 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:12.588 EAL: Hugepages will be freed exactly as allocated. 00:06:12.588 EAL: No shared files mode enabled, IPC is disabled 00:06:12.588 EAL: No shared files mode enabled, IPC is disabled 00:06:12.588 EAL: TSC frequency is ~2700000 KHz 00:06:12.588 EAL: Main lcore 0 is ready (tid=7f320c010a00;cpuset=[0]) 00:06:12.588 EAL: Trying to obtain current memory policy. 00:06:12.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.588 EAL: Restoring previous memory policy: 0 00:06:12.588 EAL: request: mp_malloc_sync 00:06:12.588 EAL: No shared files mode enabled, IPC is disabled 00:06:12.588 EAL: Heap on socket 0 was expanded by 2MB 00:06:12.588 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:12.589 EAL: Mem event callback 'spdk:(nil)' registered 00:06:12.589 00:06:12.589 00:06:12.589 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.589 http://cunit.sourceforge.net/ 00:06:12.589 00:06:12.589 00:06:12.589 Suite: components_suite 00:06:12.589 Test: vtophys_malloc_test ...passed 00:06:12.589 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 4MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 4MB 00:06:12.589 EAL: Trying to obtain current memory policy. 
00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 6MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 6MB 00:06:12.589 EAL: Trying to obtain current memory policy. 00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 10MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 10MB 00:06:12.589 EAL: Trying to obtain current memory policy. 00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 18MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 18MB 00:06:12.589 EAL: Trying to obtain current memory policy. 
00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 34MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 34MB 00:06:12.589 EAL: Trying to obtain current memory policy. 00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 66MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 66MB 00:06:12.589 EAL: Trying to obtain current memory policy. 00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.589 EAL: Restoring previous memory policy: 4 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was expanded by 130MB 00:06:12.589 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.589 EAL: request: mp_malloc_sync 00:06:12.589 EAL: No shared files mode enabled, IPC is disabled 00:06:12.589 EAL: Heap on socket 0 was shrunk by 130MB 00:06:12.589 EAL: Trying to obtain current memory policy. 
00:06:12.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.847 EAL: Restoring previous memory policy: 4 00:06:12.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.847 EAL: request: mp_malloc_sync 00:06:12.847 EAL: No shared files mode enabled, IPC is disabled 00:06:12.847 EAL: Heap on socket 0 was expanded by 258MB 00:06:12.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.847 EAL: request: mp_malloc_sync 00:06:12.847 EAL: No shared files mode enabled, IPC is disabled 00:06:12.847 EAL: Heap on socket 0 was shrunk by 258MB 00:06:12.847 EAL: Trying to obtain current memory policy. 00:06:12.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.106 EAL: Restoring previous memory policy: 4 00:06:13.106 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.106 EAL: request: mp_malloc_sync 00:06:13.106 EAL: No shared files mode enabled, IPC is disabled 00:06:13.106 EAL: Heap on socket 0 was expanded by 514MB 00:06:13.106 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.106 EAL: request: mp_malloc_sync 00:06:13.106 EAL: No shared files mode enabled, IPC is disabled 00:06:13.106 EAL: Heap on socket 0 was shrunk by 514MB 00:06:13.106 EAL: Trying to obtain current memory policy. 
00:06:13.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.364 EAL: Restoring previous memory policy: 4 00:06:13.364 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.364 EAL: request: mp_malloc_sync 00:06:13.364 EAL: No shared files mode enabled, IPC is disabled 00:06:13.364 EAL: Heap on socket 0 was expanded by 1026MB 00:06:13.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.880 EAL: request: mp_malloc_sync 00:06:13.880 EAL: No shared files mode enabled, IPC is disabled 00:06:13.880 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:13.880 passed 00:06:13.880 00:06:13.880 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.880 suites 1 1 n/a 0 0 00:06:13.880 tests 2 2 2 0 0 00:06:13.880 asserts 497 497 497 0 n/a 00:06:13.880 00:06:13.880 Elapsed time = 1.356 seconds 00:06:13.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.880 EAL: request: mp_malloc_sync 00:06:13.880 EAL: No shared files mode enabled, IPC is disabled 00:06:13.880 EAL: Heap on socket 0 was shrunk by 2MB 00:06:13.880 EAL: No shared files mode enabled, IPC is disabled 00:06:13.880 EAL: No shared files mode enabled, IPC is disabled 00:06:13.880 EAL: No shared files mode enabled, IPC is disabled 00:06:13.880 00:06:13.880 real 0m1.483s 00:06:13.880 user 0m0.857s 00:06:13.880 sys 0m0.588s 00:06:13.880 10:34:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.880 10:34:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:13.880 ************************************ 00:06:13.880 END TEST env_vtophys 00:06:13.880 ************************************ 00:06:13.880 10:34:01 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:13.880 10:34:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.880 10:34:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.880 10:34:01 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.880 
************************************ 00:06:13.880 START TEST env_pci 00:06:13.880 ************************************ 00:06:13.880 10:34:01 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:13.880 00:06:13.880 00:06:13.880 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.880 http://cunit.sourceforge.net/ 00:06:13.880 00:06:13.880 00:06:13.880 Suite: pci 00:06:13.880 Test: pci_hook ...[2024-11-19 10:34:01.472439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1215468 has claimed it 00:06:13.880 EAL: Cannot find device (10000:00:01.0) 00:06:13.880 EAL: Failed to attach device on primary process 00:06:13.880 passed 00:06:13.880 00:06:13.880 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.880 suites 1 1 n/a 0 0 00:06:13.880 tests 1 1 1 0 0 00:06:13.880 asserts 25 25 25 0 n/a 00:06:13.881 00:06:13.881 Elapsed time = 0.022 seconds 00:06:13.881 00:06:13.881 real 0m0.035s 00:06:13.881 user 0m0.010s 00:06:13.881 sys 0m0.025s 00:06:13.881 10:34:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.881 10:34:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:13.881 ************************************ 00:06:13.881 END TEST env_pci 00:06:13.881 ************************************ 00:06:14.140 10:34:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:14.140 10:34:01 env -- env/env.sh@15 -- # uname 00:06:14.140 10:34:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:14.140 10:34:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:14.140 10:34:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.140 10:34:01 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:14.140 10:34:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.140 10:34:01 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.140 ************************************ 00:06:14.140 START TEST env_dpdk_post_init 00:06:14.140 ************************************ 00:06:14.140 10:34:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.140 EAL: Detected CPU lcores: 48 00:06:14.140 EAL: Detected NUMA nodes: 2 00:06:14.140 EAL: Detected shared linkage of DPDK 00:06:14.140 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.140 EAL: Selected IOVA mode 'VA' 00:06:14.140 EAL: VFIO support initialized 00:06:14.140 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.140 EAL: Using IOMMU type 1 (Type 1) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:14.140 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:15.077 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:15.077 EAL: Probe PCI driver: 
spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:15.077 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:18.357 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:06:18.357 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:06:18.357 Starting DPDK initialization... 00:06:18.357 Starting SPDK post initialization... 00:06:18.357 SPDK NVMe probe 00:06:18.357 Attaching to 0000:0b:00.0 00:06:18.357 Attached to 0000:0b:00.0 00:06:18.357 Cleaning up... 00:06:18.357 00:06:18.357 real 0m4.348s 00:06:18.357 user 0m2.962s 00:06:18.357 sys 0m0.445s 00:06:18.357 10:34:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.357 10:34:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.357 ************************************ 00:06:18.357 END TEST env_dpdk_post_init 00:06:18.357 ************************************ 00:06:18.357 10:34:05 env -- env/env.sh@26 -- # uname 00:06:18.357 10:34:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:18.357 10:34:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.357 10:34:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.357 10:34:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.357 10:34:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.357 ************************************ 00:06:18.357 START TEST env_mem_callbacks 00:06:18.357 ************************************ 00:06:18.357 10:34:05 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.357 EAL: Detected CPU lcores: 48 00:06:18.357 EAL: Detected NUMA nodes: 2 00:06:18.357 EAL: Detected shared linkage of DPDK 00:06:18.357 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.616 EAL: Selected IOVA mode 'VA' 00:06:18.616 EAL: VFIO support initialized 00:06:18.616 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.616 00:06:18.616 00:06:18.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.616 http://cunit.sourceforge.net/ 00:06:18.616 00:06:18.616 00:06:18.616 Suite: memory 00:06:18.616 Test: test ... 00:06:18.616 register 0x200000200000 2097152 00:06:18.616 malloc 3145728 00:06:18.616 register 0x200000400000 4194304 00:06:18.616 buf 0x200000500000 len 3145728 PASSED 00:06:18.616 malloc 64 00:06:18.616 buf 0x2000004fff40 len 64 PASSED 00:06:18.616 malloc 4194304 00:06:18.616 register 0x200000800000 6291456 00:06:18.616 buf 0x200000a00000 len 4194304 PASSED 00:06:18.616 free 0x200000500000 3145728 00:06:18.616 free 0x2000004fff40 64 00:06:18.616 unregister 0x200000400000 4194304 PASSED 00:06:18.616 free 0x200000a00000 4194304 00:06:18.616 unregister 0x200000800000 6291456 PASSED 00:06:18.616 malloc 8388608 00:06:18.616 register 0x200000400000 10485760 00:06:18.616 buf 0x200000600000 len 8388608 PASSED 00:06:18.616 free 0x200000600000 8388608 00:06:18.616 unregister 0x200000400000 10485760 PASSED 00:06:18.616 passed 00:06:18.616 00:06:18.616 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.616 suites 1 1 n/a 0 0 00:06:18.616 tests 1 1 1 0 0 00:06:18.616 asserts 15 15 15 0 n/a 00:06:18.616 00:06:18.616 Elapsed time = 0.005 seconds 00:06:18.616 00:06:18.616 real 0m0.049s 00:06:18.616 user 0m0.013s 00:06:18.616 sys 0m0.035s 00:06:18.616 10:34:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.616 10:34:06 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:18.616 ************************************ 00:06:18.616 END TEST env_mem_callbacks 00:06:18.616 ************************************ 00:06:18.616 00:06:18.616 real 0m6.465s 00:06:18.616 user 0m4.172s 00:06:18.616 sys 0m1.335s 00:06:18.616 10:34:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.616 10:34:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.616 ************************************ 00:06:18.616 END TEST env 00:06:18.616 ************************************ 00:06:18.616 10:34:06 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:18.616 10:34:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.616 10:34:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.616 10:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:18.616 ************************************ 00:06:18.616 START TEST rpc 00:06:18.616 ************************************ 00:06:18.616 10:34:06 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:18.616 * Looking for test storage... 
00:06:18.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:18.616 10:34:06 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.616 10:34:06 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.616 10:34:06 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.616 10:34:06 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.616 10:34:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.616 10:34:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.616 10:34:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.616 10:34:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.616 10:34:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.616 10:34:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.616 10:34:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.616 10:34:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.616 10:34:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.616 10:34:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.616 10:34:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.616 10:34:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.616 10:34:06 rpc -- scripts/common.sh@345 -- # : 1 00:06:18.617 10:34:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.617 10:34:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.617 10:34:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:18.617 10:34:06 rpc -- scripts/common.sh@353 -- # local d=1 00:06:18.617 10:34:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.617 10:34:06 rpc -- scripts/common.sh@355 -- # echo 1 00:06:18.617 10:34:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.617 10:34:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:18.617 10:34:06 rpc -- scripts/common.sh@353 -- # local d=2 00:06:18.617 10:34:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.617 10:34:06 rpc -- scripts/common.sh@355 -- # echo 2 00:06:18.617 10:34:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.617 10:34:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.617 10:34:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.617 10:34:06 rpc -- scripts/common.sh@368 -- # return 0 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.617 --rc genhtml_branch_coverage=1 00:06:18.617 --rc genhtml_function_coverage=1 00:06:18.617 --rc genhtml_legend=1 00:06:18.617 --rc geninfo_all_blocks=1 00:06:18.617 --rc geninfo_unexecuted_blocks=1 00:06:18.617 00:06:18.617 ' 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.617 --rc genhtml_branch_coverage=1 00:06:18.617 --rc genhtml_function_coverage=1 00:06:18.617 --rc genhtml_legend=1 00:06:18.617 --rc geninfo_all_blocks=1 00:06:18.617 --rc geninfo_unexecuted_blocks=1 00:06:18.617 00:06:18.617 ' 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:18.617 --rc genhtml_branch_coverage=1 00:06:18.617 --rc genhtml_function_coverage=1 00:06:18.617 --rc genhtml_legend=1 00:06:18.617 --rc geninfo_all_blocks=1 00:06:18.617 --rc geninfo_unexecuted_blocks=1 00:06:18.617 00:06:18.617 ' 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.617 --rc genhtml_branch_coverage=1 00:06:18.617 --rc genhtml_function_coverage=1 00:06:18.617 --rc genhtml_legend=1 00:06:18.617 --rc geninfo_all_blocks=1 00:06:18.617 --rc geninfo_unexecuted_blocks=1 00:06:18.617 00:06:18.617 ' 00:06:18.617 10:34:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1216215 00:06:18.617 10:34:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:18.617 10:34:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.617 10:34:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1216215 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 1216215 ']' 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.617 10:34:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.876 [2024-11-19 10:34:06.266235] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:18.876 [2024-11-19 10:34:06.266338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216215 ] 00:06:18.876 [2024-11-19 10:34:06.331081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.876 [2024-11-19 10:34:06.388253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:18.876 [2024-11-19 10:34:06.388315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1216215' to capture a snapshot of events at runtime. 00:06:18.876 [2024-11-19 10:34:06.388345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.876 [2024-11-19 10:34:06.388356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.876 [2024-11-19 10:34:06.388366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1216215 for offline analysis/debug. 
00:06:18.876 [2024-11-19 10:34:06.388955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.134 10:34:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.134 10:34:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.134 10:34:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:19.134 10:34:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:19.134 10:34:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:19.134 10:34:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:19.134 10:34:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.134 10:34:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.134 10:34:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.134 ************************************ 00:06:19.134 START TEST rpc_integrity 00:06:19.134 ************************************ 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.134 10:34:06 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:19.134 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.134 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:19.393 { 00:06:19.393 "name": "Malloc0", 00:06:19.393 "aliases": [ 00:06:19.393 "4e247d99-0859-4391-b262-a4acea00c64f" 00:06:19.393 ], 00:06:19.393 "product_name": "Malloc disk", 00:06:19.393 "block_size": 512, 00:06:19.393 "num_blocks": 16384, 00:06:19.393 "uuid": "4e247d99-0859-4391-b262-a4acea00c64f", 00:06:19.393 "assigned_rate_limits": { 00:06:19.393 "rw_ios_per_sec": 0, 00:06:19.393 "rw_mbytes_per_sec": 0, 00:06:19.393 "r_mbytes_per_sec": 0, 00:06:19.393 "w_mbytes_per_sec": 0 00:06:19.393 }, 00:06:19.393 "claimed": false, 00:06:19.393 "zoned": false, 00:06:19.393 "supported_io_types": { 00:06:19.393 "read": true, 00:06:19.393 "write": true, 00:06:19.393 "unmap": true, 00:06:19.393 "flush": true, 00:06:19.393 "reset": true, 00:06:19.393 "nvme_admin": false, 00:06:19.393 "nvme_io": false, 00:06:19.393 "nvme_io_md": false, 00:06:19.393 "write_zeroes": true, 00:06:19.393 "zcopy": true, 00:06:19.393 "get_zone_info": false, 00:06:19.393 
"zone_management": false, 00:06:19.393 "zone_append": false, 00:06:19.393 "compare": false, 00:06:19.393 "compare_and_write": false, 00:06:19.393 "abort": true, 00:06:19.393 "seek_hole": false, 00:06:19.393 "seek_data": false, 00:06:19.393 "copy": true, 00:06:19.393 "nvme_iov_md": false 00:06:19.393 }, 00:06:19.393 "memory_domains": [ 00:06:19.393 { 00:06:19.393 "dma_device_id": "system", 00:06:19.393 "dma_device_type": 1 00:06:19.393 }, 00:06:19.393 { 00:06:19.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.393 "dma_device_type": 2 00:06:19.393 } 00:06:19.393 ], 00:06:19.393 "driver_specific": {} 00:06:19.393 } 00:06:19.393 ]' 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.393 [2024-11-19 10:34:06.797725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:19.393 [2024-11-19 10:34:06.797777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.393 [2024-11-19 10:34:06.797799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f0750 00:06:19.393 [2024-11-19 10:34:06.797812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.393 [2024-11-19 10:34:06.799134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.393 [2024-11-19 10:34:06.799156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:19.393 Passthru0 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.393 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.393 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:19.393 { 00:06:19.393 "name": "Malloc0", 00:06:19.393 "aliases": [ 00:06:19.393 "4e247d99-0859-4391-b262-a4acea00c64f" 00:06:19.393 ], 00:06:19.393 "product_name": "Malloc disk", 00:06:19.393 "block_size": 512, 00:06:19.393 "num_blocks": 16384, 00:06:19.393 "uuid": "4e247d99-0859-4391-b262-a4acea00c64f", 00:06:19.393 "assigned_rate_limits": { 00:06:19.393 "rw_ios_per_sec": 0, 00:06:19.393 "rw_mbytes_per_sec": 0, 00:06:19.393 "r_mbytes_per_sec": 0, 00:06:19.393 "w_mbytes_per_sec": 0 00:06:19.393 }, 00:06:19.393 "claimed": true, 00:06:19.393 "claim_type": "exclusive_write", 00:06:19.393 "zoned": false, 00:06:19.393 "supported_io_types": { 00:06:19.393 "read": true, 00:06:19.393 "write": true, 00:06:19.393 "unmap": true, 00:06:19.393 "flush": true, 00:06:19.393 "reset": true, 00:06:19.393 "nvme_admin": false, 00:06:19.393 "nvme_io": false, 00:06:19.393 "nvme_io_md": false, 00:06:19.393 "write_zeroes": true, 00:06:19.393 "zcopy": true, 00:06:19.393 "get_zone_info": false, 00:06:19.393 "zone_management": false, 00:06:19.393 "zone_append": false, 00:06:19.393 "compare": false, 00:06:19.393 "compare_and_write": false, 00:06:19.393 "abort": true, 00:06:19.393 "seek_hole": false, 00:06:19.393 "seek_data": false, 00:06:19.393 "copy": true, 00:06:19.393 "nvme_iov_md": false 00:06:19.393 }, 00:06:19.393 "memory_domains": [ 00:06:19.393 { 00:06:19.393 "dma_device_id": "system", 00:06:19.393 "dma_device_type": 1 00:06:19.393 }, 00:06:19.393 { 00:06:19.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.393 "dma_device_type": 2 00:06:19.393 } 00:06:19.393 ], 00:06:19.393 "driver_specific": {} 00:06:19.393 }, 00:06:19.393 { 
00:06:19.393 "name": "Passthru0", 00:06:19.393 "aliases": [ 00:06:19.393 "b7713c3c-30fd-551b-9fd4-60015a9c5069" 00:06:19.393 ], 00:06:19.393 "product_name": "passthru", 00:06:19.393 "block_size": 512, 00:06:19.393 "num_blocks": 16384, 00:06:19.393 "uuid": "b7713c3c-30fd-551b-9fd4-60015a9c5069", 00:06:19.393 "assigned_rate_limits": { 00:06:19.393 "rw_ios_per_sec": 0, 00:06:19.393 "rw_mbytes_per_sec": 0, 00:06:19.393 "r_mbytes_per_sec": 0, 00:06:19.393 "w_mbytes_per_sec": 0 00:06:19.393 }, 00:06:19.393 "claimed": false, 00:06:19.393 "zoned": false, 00:06:19.393 "supported_io_types": { 00:06:19.393 "read": true, 00:06:19.393 "write": true, 00:06:19.393 "unmap": true, 00:06:19.393 "flush": true, 00:06:19.393 "reset": true, 00:06:19.393 "nvme_admin": false, 00:06:19.393 "nvme_io": false, 00:06:19.393 "nvme_io_md": false, 00:06:19.393 "write_zeroes": true, 00:06:19.393 "zcopy": true, 00:06:19.393 "get_zone_info": false, 00:06:19.393 "zone_management": false, 00:06:19.393 "zone_append": false, 00:06:19.394 "compare": false, 00:06:19.394 "compare_and_write": false, 00:06:19.394 "abort": true, 00:06:19.394 "seek_hole": false, 00:06:19.394 "seek_data": false, 00:06:19.394 "copy": true, 00:06:19.394 "nvme_iov_md": false 00:06:19.394 }, 00:06:19.394 "memory_domains": [ 00:06:19.394 { 00:06:19.394 "dma_device_id": "system", 00:06:19.394 "dma_device_type": 1 00:06:19.394 }, 00:06:19.394 { 00:06:19.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.394 "dma_device_type": 2 00:06:19.394 } 00:06:19.394 ], 00:06:19.394 "driver_specific": { 00:06:19.394 "passthru": { 00:06:19.394 "name": "Passthru0", 00:06:19.394 "base_bdev_name": "Malloc0" 00:06:19.394 } 00:06:19.394 } 00:06:19.394 } 00:06:19.394 ]' 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.394 10:34:06 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:19.394 10:34:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:19.394 00:06:19.394 real 0m0.217s 00:06:19.394 user 0m0.137s 00:06:19.394 sys 0m0.019s 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 ************************************ 00:06:19.394 END TEST rpc_integrity 00:06:19.394 ************************************ 00:06:19.394 10:34:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:19.394 10:34:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.394 10:34:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.394 10:34:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 ************************************ 00:06:19.394 START TEST rpc_plugins 
00:06:19.394 ************************************ 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:19.394 10:34:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.394 10:34:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:19.394 10:34:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.394 10:34:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.394 10:34:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:19.394 { 00:06:19.394 "name": "Malloc1", 00:06:19.394 "aliases": [ 00:06:19.394 "ee025649-d0c1-42d9-9045-5631fecd177e" 00:06:19.394 ], 00:06:19.394 "product_name": "Malloc disk", 00:06:19.394 "block_size": 4096, 00:06:19.394 "num_blocks": 256, 00:06:19.394 "uuid": "ee025649-d0c1-42d9-9045-5631fecd177e", 00:06:19.394 "assigned_rate_limits": { 00:06:19.394 "rw_ios_per_sec": 0, 00:06:19.394 "rw_mbytes_per_sec": 0, 00:06:19.394 "r_mbytes_per_sec": 0, 00:06:19.394 "w_mbytes_per_sec": 0 00:06:19.394 }, 00:06:19.394 "claimed": false, 00:06:19.394 "zoned": false, 00:06:19.394 "supported_io_types": { 00:06:19.394 "read": true, 00:06:19.394 "write": true, 00:06:19.394 "unmap": true, 00:06:19.394 "flush": true, 00:06:19.394 "reset": true, 00:06:19.394 "nvme_admin": false, 00:06:19.394 "nvme_io": false, 00:06:19.394 "nvme_io_md": false, 00:06:19.394 "write_zeroes": true, 00:06:19.394 "zcopy": true, 00:06:19.394 "get_zone_info": false, 00:06:19.394 "zone_management": false, 00:06:19.394 
"zone_append": false, 00:06:19.394 "compare": false, 00:06:19.394 "compare_and_write": false, 00:06:19.394 "abort": true, 00:06:19.394 "seek_hole": false, 00:06:19.394 "seek_data": false, 00:06:19.394 "copy": true, 00:06:19.394 "nvme_iov_md": false 00:06:19.394 }, 00:06:19.394 "memory_domains": [ 00:06:19.394 { 00:06:19.394 "dma_device_id": "system", 00:06:19.394 "dma_device_type": 1 00:06:19.394 }, 00:06:19.394 { 00:06:19.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.394 "dma_device_type": 2 00:06:19.394 } 00:06:19.394 ], 00:06:19.394 "driver_specific": {} 00:06:19.394 } 00:06:19.394 ]' 00:06:19.394 10:34:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:19.394 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:19.394 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:19.394 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.394 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.652 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.652 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:19.652 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:19.652 10:34:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:19.652 00:06:19.652 real 0m0.106s 00:06:19.652 user 0m0.070s 00:06:19.652 sys 0m0.008s 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.652 10:34:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 ************************************ 
00:06:19.652 END TEST rpc_plugins 00:06:19.652 ************************************ 00:06:19.652 10:34:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:19.652 10:34:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.652 10:34:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.652 10:34:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 ************************************ 00:06:19.652 START TEST rpc_trace_cmd_test 00:06:19.652 ************************************ 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:19.652 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1216215", 00:06:19.652 "tpoint_group_mask": "0x8", 00:06:19.652 "iscsi_conn": { 00:06:19.652 "mask": "0x2", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "scsi": { 00:06:19.652 "mask": "0x4", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "bdev": { 00:06:19.652 "mask": "0x8", 00:06:19.652 "tpoint_mask": "0xffffffffffffffff" 00:06:19.652 }, 00:06:19.652 "nvmf_rdma": { 00:06:19.652 "mask": "0x10", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "nvmf_tcp": { 00:06:19.652 "mask": "0x20", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "ftl": { 00:06:19.652 "mask": "0x40", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "blobfs": { 00:06:19.652 "mask": "0x80", 00:06:19.652 
"tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "dsa": { 00:06:19.652 "mask": "0x200", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "thread": { 00:06:19.652 "mask": "0x400", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "nvme_pcie": { 00:06:19.652 "mask": "0x800", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "iaa": { 00:06:19.652 "mask": "0x1000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "nvme_tcp": { 00:06:19.652 "mask": "0x2000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "bdev_nvme": { 00:06:19.652 "mask": "0x4000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "sock": { 00:06:19.652 "mask": "0x8000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "blob": { 00:06:19.652 "mask": "0x10000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "bdev_raid": { 00:06:19.652 "mask": "0x20000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 }, 00:06:19.652 "scheduler": { 00:06:19.652 "mask": "0x40000", 00:06:19.652 "tpoint_mask": "0x0" 00:06:19.652 } 00:06:19.652 }' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:19.652 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:19.911 10:34:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:19.911 00:06:19.911 real 0m0.181s 00:06:19.911 user 0m0.161s 00:06:19.911 sys 0m0.013s 00:06:19.911 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 ************************************ 00:06:19.911 END TEST rpc_trace_cmd_test 00:06:19.911 ************************************ 00:06:19.911 10:34:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:19.911 10:34:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:19.911 10:34:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:19.911 10:34:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.911 10:34:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.911 10:34:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 ************************************ 00:06:19.911 START TEST rpc_daemon_integrity 00:06:19.911 ************************************ 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:19.911 { 00:06:19.911 "name": "Malloc2", 00:06:19.911 "aliases": [ 00:06:19.911 "88e0fd32-10fd-4e55-a7b3-d17282fa0551" 00:06:19.911 ], 00:06:19.911 "product_name": "Malloc disk", 00:06:19.911 "block_size": 512, 00:06:19.911 "num_blocks": 16384, 00:06:19.911 "uuid": "88e0fd32-10fd-4e55-a7b3-d17282fa0551", 00:06:19.911 "assigned_rate_limits": { 00:06:19.911 "rw_ios_per_sec": 0, 00:06:19.911 "rw_mbytes_per_sec": 0, 00:06:19.911 "r_mbytes_per_sec": 0, 00:06:19.911 "w_mbytes_per_sec": 0 00:06:19.911 }, 00:06:19.911 "claimed": false, 00:06:19.911 "zoned": false, 00:06:19.911 "supported_io_types": { 00:06:19.911 "read": true, 00:06:19.911 "write": true, 00:06:19.911 "unmap": true, 00:06:19.911 "flush": true, 00:06:19.911 "reset": true, 00:06:19.911 "nvme_admin": false, 00:06:19.911 "nvme_io": false, 00:06:19.911 "nvme_io_md": false, 00:06:19.911 "write_zeroes": true, 00:06:19.911 "zcopy": true, 00:06:19.911 "get_zone_info": false, 00:06:19.911 "zone_management": false, 00:06:19.911 "zone_append": false, 00:06:19.911 "compare": false, 00:06:19.911 "compare_and_write": false, 00:06:19.911 "abort": true, 00:06:19.911 "seek_hole": false, 00:06:19.911 "seek_data": false, 00:06:19.911 "copy": true, 00:06:19.911 "nvme_iov_md": false 00:06:19.911 }, 00:06:19.911 "memory_domains": [ 00:06:19.911 { 
00:06:19.911 "dma_device_id": "system", 00:06:19.911 "dma_device_type": 1 00:06:19.911 }, 00:06:19.911 { 00:06:19.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.911 "dma_device_type": 2 00:06:19.911 } 00:06:19.911 ], 00:06:19.911 "driver_specific": {} 00:06:19.911 } 00:06:19.911 ]' 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 [2024-11-19 10:34:07.440046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:19.911 [2024-11-19 10:34:07.440084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.911 [2024-11-19 10:34:07.440120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1781200 00:06:19.911 [2024-11-19 10:34:07.440133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.911 [2024-11-19 10:34:07.441337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.911 [2024-11-19 10:34:07.441370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:19.911 Passthru0 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:19.911 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:19.911 { 00:06:19.911 "name": "Malloc2", 00:06:19.911 "aliases": [ 00:06:19.911 "88e0fd32-10fd-4e55-a7b3-d17282fa0551" 00:06:19.911 ], 00:06:19.911 "product_name": "Malloc disk", 00:06:19.911 "block_size": 512, 00:06:19.911 "num_blocks": 16384, 00:06:19.911 "uuid": "88e0fd32-10fd-4e55-a7b3-d17282fa0551", 00:06:19.911 "assigned_rate_limits": { 00:06:19.911 "rw_ios_per_sec": 0, 00:06:19.911 "rw_mbytes_per_sec": 0, 00:06:19.911 "r_mbytes_per_sec": 0, 00:06:19.911 "w_mbytes_per_sec": 0 00:06:19.911 }, 00:06:19.911 "claimed": true, 00:06:19.911 "claim_type": "exclusive_write", 00:06:19.911 "zoned": false, 00:06:19.911 "supported_io_types": { 00:06:19.911 "read": true, 00:06:19.911 "write": true, 00:06:19.911 "unmap": true, 00:06:19.911 "flush": true, 00:06:19.911 "reset": true, 00:06:19.911 "nvme_admin": false, 00:06:19.911 "nvme_io": false, 00:06:19.911 "nvme_io_md": false, 00:06:19.911 "write_zeroes": true, 00:06:19.911 "zcopy": true, 00:06:19.911 "get_zone_info": false, 00:06:19.911 "zone_management": false, 00:06:19.911 "zone_append": false, 00:06:19.911 "compare": false, 00:06:19.912 "compare_and_write": false, 00:06:19.912 "abort": true, 00:06:19.912 "seek_hole": false, 00:06:19.912 "seek_data": false, 00:06:19.912 "copy": true, 00:06:19.912 "nvme_iov_md": false 00:06:19.912 }, 00:06:19.912 "memory_domains": [ 00:06:19.912 { 00:06:19.912 "dma_device_id": "system", 00:06:19.912 "dma_device_type": 1 00:06:19.912 }, 00:06:19.912 { 00:06:19.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.912 "dma_device_type": 2 00:06:19.912 } 00:06:19.912 ], 00:06:19.912 "driver_specific": {} 00:06:19.912 }, 00:06:19.912 { 00:06:19.912 "name": "Passthru0", 00:06:19.912 "aliases": [ 00:06:19.912 "e94f414b-12de-5ca5-8179-fc6698e77eca" 00:06:19.912 ], 00:06:19.912 "product_name": "passthru", 00:06:19.912 "block_size": 512, 00:06:19.912 "num_blocks": 16384, 00:06:19.912 "uuid": 
"e94f414b-12de-5ca5-8179-fc6698e77eca", 00:06:19.912 "assigned_rate_limits": { 00:06:19.912 "rw_ios_per_sec": 0, 00:06:19.912 "rw_mbytes_per_sec": 0, 00:06:19.912 "r_mbytes_per_sec": 0, 00:06:19.912 "w_mbytes_per_sec": 0 00:06:19.912 }, 00:06:19.912 "claimed": false, 00:06:19.912 "zoned": false, 00:06:19.912 "supported_io_types": { 00:06:19.912 "read": true, 00:06:19.912 "write": true, 00:06:19.912 "unmap": true, 00:06:19.912 "flush": true, 00:06:19.912 "reset": true, 00:06:19.912 "nvme_admin": false, 00:06:19.912 "nvme_io": false, 00:06:19.912 "nvme_io_md": false, 00:06:19.912 "write_zeroes": true, 00:06:19.912 "zcopy": true, 00:06:19.912 "get_zone_info": false, 00:06:19.912 "zone_management": false, 00:06:19.912 "zone_append": false, 00:06:19.912 "compare": false, 00:06:19.912 "compare_and_write": false, 00:06:19.912 "abort": true, 00:06:19.912 "seek_hole": false, 00:06:19.912 "seek_data": false, 00:06:19.912 "copy": true, 00:06:19.912 "nvme_iov_md": false 00:06:19.912 }, 00:06:19.912 "memory_domains": [ 00:06:19.912 { 00:06:19.912 "dma_device_id": "system", 00:06:19.912 "dma_device_type": 1 00:06:19.912 }, 00:06:19.912 { 00:06:19.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.912 "dma_device_type": 2 00:06:19.912 } 00:06:19.912 ], 00:06:19.912 "driver_specific": { 00:06:19.912 "passthru": { 00:06:19.912 "name": "Passthru0", 00:06:19.912 "base_bdev_name": "Malloc2" 00:06:19.912 } 00:06:19.912 } 00:06:19.912 } 00:06:19.912 ]' 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.912 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:20.170 10:34:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.170 00:06:20.170 real 0m0.213s 00:06:20.170 user 0m0.139s 00:06:20.170 sys 0m0.019s 00:06:20.170 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.170 10:34:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 ************************************ 00:06:20.170 END TEST rpc_daemon_integrity 00:06:20.170 ************************************ 00:06:20.170 10:34:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:20.170 10:34:07 rpc -- rpc/rpc.sh@84 -- # killprocess 1216215 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 1216215 ']' 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@958 -- # kill -0 1216215 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@959 -- # uname 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.170 10:34:07 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216215 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216215' 00:06:20.170 killing process with pid 1216215 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@973 -- # kill 1216215 00:06:20.170 10:34:07 rpc -- common/autotest_common.sh@978 -- # wait 1216215 00:06:20.430 00:06:20.430 real 0m1.946s 00:06:20.430 user 0m2.393s 00:06:20.430 sys 0m0.613s 00:06:20.430 10:34:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.430 10:34:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.430 ************************************ 00:06:20.430 END TEST rpc 00:06:20.430 ************************************ 00:06:20.430 10:34:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:20.430 10:34:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.430 10:34:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.430 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:20.689 ************************************ 00:06:20.689 START TEST skip_rpc 00:06:20.689 ************************************ 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:20.689 * Looking for test storage... 
00:06:20.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.689 10:34:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.689 --rc genhtml_branch_coverage=1 00:06:20.689 --rc genhtml_function_coverage=1 00:06:20.689 --rc genhtml_legend=1 00:06:20.689 --rc geninfo_all_blocks=1 00:06:20.689 --rc geninfo_unexecuted_blocks=1 00:06:20.689 00:06:20.689 ' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.689 --rc genhtml_branch_coverage=1 00:06:20.689 --rc genhtml_function_coverage=1 00:06:20.689 --rc genhtml_legend=1 00:06:20.689 --rc geninfo_all_blocks=1 00:06:20.689 --rc geninfo_unexecuted_blocks=1 00:06:20.689 00:06:20.689 ' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:20.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.689 --rc genhtml_branch_coverage=1 00:06:20.689 --rc genhtml_function_coverage=1 00:06:20.689 --rc genhtml_legend=1 00:06:20.689 --rc geninfo_all_blocks=1 00:06:20.689 --rc geninfo_unexecuted_blocks=1 00:06:20.689 00:06:20.689 ' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.689 --rc genhtml_branch_coverage=1 00:06:20.689 --rc genhtml_function_coverage=1 00:06:20.689 --rc genhtml_legend=1 00:06:20.689 --rc geninfo_all_blocks=1 00:06:20.689 --rc geninfo_unexecuted_blocks=1 00:06:20.689 00:06:20.689 ' 00:06:20.689 10:34:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.689 10:34:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:20.689 10:34:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.689 10:34:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.689 ************************************ 00:06:20.689 START TEST skip_rpc 00:06:20.689 ************************************ 00:06:20.689 10:34:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:20.689 10:34:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1216576 00:06:20.689 10:34:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:20.689 10:34:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.689 10:34:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:20.689 [2024-11-19 10:34:08.297413] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:20.690 [2024-11-19 10:34:08.297489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216576 ] 00:06:20.948 [2024-11-19 10:34:08.360432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.948 [2024-11-19 10:34:08.418807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.212 10:34:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1216576 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1216576 ']' 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1216576 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216576 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216576' 00:06:26.212 killing process with pid 1216576 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1216576 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1216576 00:06:26.212 00:06:26.212 real 0m5.453s 00:06:26.212 user 0m5.138s 00:06:26.212 sys 0m0.326s 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.212 10:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.212 ************************************ 00:06:26.212 END TEST skip_rpc 00:06:26.212 ************************************ 00:06:26.213 10:34:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:26.213 10:34:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.213 10:34:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.213 10:34:13 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.213 ************************************ 00:06:26.213 START TEST skip_rpc_with_json 00:06:26.213 ************************************ 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1217269 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1217269 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1217269 ']' 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.213 10:34:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.213 [2024-11-19 10:34:13.804369] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:26.213 [2024-11-19 10:34:13.804462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217269 ] 00:06:26.471 [2024-11-19 10:34:13.871219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.471 [2024-11-19 10:34:13.930762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.729 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.729 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:26.729 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:26.729 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.729 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.730 [2024-11-19 10:34:14.202101] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:26.730 request: 00:06:26.730 { 00:06:26.730 "trtype": "tcp", 00:06:26.730 "method": "nvmf_get_transports", 00:06:26.730 "req_id": 1 00:06:26.730 } 00:06:26.730 Got JSON-RPC error response 00:06:26.730 response: 00:06:26.730 { 00:06:26.730 "code": -19, 00:06:26.730 "message": "No such device" 00:06:26.730 } 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.730 [2024-11-19 10:34:14.210209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.730 10:34:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.730 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.988 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.988 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:26.988 { 00:06:26.988 "subsystems": [ 00:06:26.988 { 00:06:26.988 "subsystem": "fsdev", 00:06:26.988 "config": [ 00:06:26.988 { 00:06:26.988 "method": "fsdev_set_opts", 00:06:26.988 "params": { 00:06:26.988 "fsdev_io_pool_size": 65535, 00:06:26.988 "fsdev_io_cache_size": 256 00:06:26.988 } 00:06:26.988 } 00:06:26.988 ] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "vfio_user_target", 00:06:26.988 "config": null 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "keyring", 00:06:26.988 "config": [] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "iobuf", 00:06:26.988 "config": [ 00:06:26.988 { 00:06:26.988 "method": "iobuf_set_options", 00:06:26.988 "params": { 00:06:26.988 "small_pool_count": 8192, 00:06:26.988 "large_pool_count": 1024, 00:06:26.988 "small_bufsize": 8192, 00:06:26.988 "large_bufsize": 135168, 00:06:26.988 "enable_numa": false 00:06:26.988 } 00:06:26.988 } 00:06:26.988 ] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "sock", 00:06:26.988 "config": [ 00:06:26.988 { 00:06:26.988 "method": "sock_set_default_impl", 00:06:26.988 "params": { 00:06:26.988 "impl_name": "posix" 00:06:26.988 } 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "method": "sock_impl_set_options", 00:06:26.988 "params": { 00:06:26.988 "impl_name": "ssl", 00:06:26.988 "recv_buf_size": 4096, 00:06:26.988 "send_buf_size": 4096, 
00:06:26.988 "enable_recv_pipe": true, 00:06:26.988 "enable_quickack": false, 00:06:26.988 "enable_placement_id": 0, 00:06:26.988 "enable_zerocopy_send_server": true, 00:06:26.988 "enable_zerocopy_send_client": false, 00:06:26.988 "zerocopy_threshold": 0, 00:06:26.988 "tls_version": 0, 00:06:26.988 "enable_ktls": false 00:06:26.988 } 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "method": "sock_impl_set_options", 00:06:26.988 "params": { 00:06:26.988 "impl_name": "posix", 00:06:26.988 "recv_buf_size": 2097152, 00:06:26.988 "send_buf_size": 2097152, 00:06:26.988 "enable_recv_pipe": true, 00:06:26.988 "enable_quickack": false, 00:06:26.988 "enable_placement_id": 0, 00:06:26.988 "enable_zerocopy_send_server": true, 00:06:26.988 "enable_zerocopy_send_client": false, 00:06:26.988 "zerocopy_threshold": 0, 00:06:26.988 "tls_version": 0, 00:06:26.988 "enable_ktls": false 00:06:26.988 } 00:06:26.988 } 00:06:26.988 ] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "vmd", 00:06:26.988 "config": [] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "accel", 00:06:26.988 "config": [ 00:06:26.988 { 00:06:26.988 "method": "accel_set_options", 00:06:26.988 "params": { 00:06:26.988 "small_cache_size": 128, 00:06:26.988 "large_cache_size": 16, 00:06:26.988 "task_count": 2048, 00:06:26.988 "sequence_count": 2048, 00:06:26.988 "buf_count": 2048 00:06:26.988 } 00:06:26.988 } 00:06:26.988 ] 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "subsystem": "bdev", 00:06:26.988 "config": [ 00:06:26.988 { 00:06:26.988 "method": "bdev_set_options", 00:06:26.988 "params": { 00:06:26.988 "bdev_io_pool_size": 65535, 00:06:26.988 "bdev_io_cache_size": 256, 00:06:26.988 "bdev_auto_examine": true, 00:06:26.988 "iobuf_small_cache_size": 128, 00:06:26.988 "iobuf_large_cache_size": 16 00:06:26.988 } 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "method": "bdev_raid_set_options", 00:06:26.988 "params": { 00:06:26.988 "process_window_size_kb": 1024, 00:06:26.988 "process_max_bandwidth_mb_sec": 0 
00:06:26.988 } 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "method": "bdev_iscsi_set_options", 00:06:26.988 "params": { 00:06:26.988 "timeout_sec": 30 00:06:26.988 } 00:06:26.988 }, 00:06:26.988 { 00:06:26.988 "method": "bdev_nvme_set_options", 00:06:26.988 "params": { 00:06:26.988 "action_on_timeout": "none", 00:06:26.988 "timeout_us": 0, 00:06:26.988 "timeout_admin_us": 0, 00:06:26.988 "keep_alive_timeout_ms": 10000, 00:06:26.988 "arbitration_burst": 0, 00:06:26.988 "low_priority_weight": 0, 00:06:26.988 "medium_priority_weight": 0, 00:06:26.988 "high_priority_weight": 0, 00:06:26.988 "nvme_adminq_poll_period_us": 10000, 00:06:26.988 "nvme_ioq_poll_period_us": 0, 00:06:26.988 "io_queue_requests": 0, 00:06:26.988 "delay_cmd_submit": true, 00:06:26.988 "transport_retry_count": 4, 00:06:26.988 "bdev_retry_count": 3, 00:06:26.988 "transport_ack_timeout": 0, 00:06:26.988 "ctrlr_loss_timeout_sec": 0, 00:06:26.988 "reconnect_delay_sec": 0, 00:06:26.988 "fast_io_fail_timeout_sec": 0, 00:06:26.988 "disable_auto_failback": false, 00:06:26.988 "generate_uuids": false, 00:06:26.988 "transport_tos": 0, 00:06:26.988 "nvme_error_stat": false, 00:06:26.988 "rdma_srq_size": 0, 00:06:26.988 "io_path_stat": false, 00:06:26.988 "allow_accel_sequence": false, 00:06:26.988 "rdma_max_cq_size": 0, 00:06:26.988 "rdma_cm_event_timeout_ms": 0, 00:06:26.988 "dhchap_digests": [ 00:06:26.988 "sha256", 00:06:26.988 "sha384", 00:06:26.988 "sha512" 00:06:26.988 ], 00:06:26.988 "dhchap_dhgroups": [ 00:06:26.988 "null", 00:06:26.988 "ffdhe2048", 00:06:26.988 "ffdhe3072", 00:06:26.988 "ffdhe4096", 00:06:26.989 "ffdhe6144", 00:06:26.989 "ffdhe8192" 00:06:26.989 ] 00:06:26.989 } 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "method": "bdev_nvme_set_hotplug", 00:06:26.989 "params": { 00:06:26.989 "period_us": 100000, 00:06:26.989 "enable": false 00:06:26.989 } 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "method": "bdev_wait_for_examine" 00:06:26.989 } 00:06:26.989 ] 00:06:26.989 }, 00:06:26.989 { 
00:06:26.989 "subsystem": "scsi", 00:06:26.989 "config": null 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "scheduler", 00:06:26.989 "config": [ 00:06:26.989 { 00:06:26.989 "method": "framework_set_scheduler", 00:06:26.989 "params": { 00:06:26.989 "name": "static" 00:06:26.989 } 00:06:26.989 } 00:06:26.989 ] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "vhost_scsi", 00:06:26.989 "config": [] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "vhost_blk", 00:06:26.989 "config": [] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "ublk", 00:06:26.989 "config": [] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "nbd", 00:06:26.989 "config": [] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "nvmf", 00:06:26.989 "config": [ 00:06:26.989 { 00:06:26.989 "method": "nvmf_set_config", 00:06:26.989 "params": { 00:06:26.989 "discovery_filter": "match_any", 00:06:26.989 "admin_cmd_passthru": { 00:06:26.989 "identify_ctrlr": false 00:06:26.989 }, 00:06:26.989 "dhchap_digests": [ 00:06:26.989 "sha256", 00:06:26.989 "sha384", 00:06:26.989 "sha512" 00:06:26.989 ], 00:06:26.989 "dhchap_dhgroups": [ 00:06:26.989 "null", 00:06:26.989 "ffdhe2048", 00:06:26.989 "ffdhe3072", 00:06:26.989 "ffdhe4096", 00:06:26.989 "ffdhe6144", 00:06:26.989 "ffdhe8192" 00:06:26.989 ] 00:06:26.989 } 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "method": "nvmf_set_max_subsystems", 00:06:26.989 "params": { 00:06:26.989 "max_subsystems": 1024 00:06:26.989 } 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "method": "nvmf_set_crdt", 00:06:26.989 "params": { 00:06:26.989 "crdt1": 0, 00:06:26.989 "crdt2": 0, 00:06:26.989 "crdt3": 0 00:06:26.989 } 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "method": "nvmf_create_transport", 00:06:26.989 "params": { 00:06:26.989 "trtype": "TCP", 00:06:26.989 "max_queue_depth": 128, 00:06:26.989 "max_io_qpairs_per_ctrlr": 127, 00:06:26.989 "in_capsule_data_size": 4096, 00:06:26.989 "max_io_size": 131072, 00:06:26.989 
"io_unit_size": 131072, 00:06:26.989 "max_aq_depth": 128, 00:06:26.989 "num_shared_buffers": 511, 00:06:26.989 "buf_cache_size": 4294967295, 00:06:26.989 "dif_insert_or_strip": false, 00:06:26.989 "zcopy": false, 00:06:26.989 "c2h_success": true, 00:06:26.989 "sock_priority": 0, 00:06:26.989 "abort_timeout_sec": 1, 00:06:26.989 "ack_timeout": 0, 00:06:26.989 "data_wr_pool_size": 0 00:06:26.989 } 00:06:26.989 } 00:06:26.989 ] 00:06:26.989 }, 00:06:26.989 { 00:06:26.989 "subsystem": "iscsi", 00:06:26.989 "config": [ 00:06:26.989 { 00:06:26.989 "method": "iscsi_set_options", 00:06:26.989 "params": { 00:06:26.989 "node_base": "iqn.2016-06.io.spdk", 00:06:26.989 "max_sessions": 128, 00:06:26.989 "max_connections_per_session": 2, 00:06:26.989 "max_queue_depth": 64, 00:06:26.989 "default_time2wait": 2, 00:06:26.989 "default_time2retain": 20, 00:06:26.989 "first_burst_length": 8192, 00:06:26.989 "immediate_data": true, 00:06:26.989 "allow_duplicated_isid": false, 00:06:26.989 "error_recovery_level": 0, 00:06:26.989 "nop_timeout": 60, 00:06:26.989 "nop_in_interval": 30, 00:06:26.989 "disable_chap": false, 00:06:26.989 "require_chap": false, 00:06:26.989 "mutual_chap": false, 00:06:26.989 "chap_group": 0, 00:06:26.989 "max_large_datain_per_connection": 64, 00:06:26.989 "max_r2t_per_connection": 4, 00:06:26.989 "pdu_pool_size": 36864, 00:06:26.989 "immediate_data_pool_size": 16384, 00:06:26.989 "data_out_pool_size": 2048 00:06:26.989 } 00:06:26.989 } 00:06:26.989 ] 00:06:26.989 } 00:06:26.989 ] 00:06:26.989 } 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1217269 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1217269 ']' 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1217269 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217269 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217269' 00:06:26.989 killing process with pid 1217269 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1217269 00:06:26.989 10:34:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1217269 00:06:27.247 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1217409 00:06:27.247 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:27.247 10:34:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1217409 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1217409 ']' 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1217409 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217409 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217409' 00:06:32.510 killing process with pid 1217409 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1217409 00:06:32.510 10:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1217409 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:32.769 00:06:32.769 real 0m6.542s 00:06:32.769 user 0m6.194s 00:06:32.769 sys 0m0.679s 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.769 ************************************ 00:06:32.769 END TEST skip_rpc_with_json 00:06:32.769 ************************************ 00:06:32.769 10:34:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:32.769 10:34:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.769 10:34:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.769 10:34:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.769 ************************************ 00:06:32.769 START TEST skip_rpc_with_delay 00:06:32.769 ************************************ 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:32.769 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.026 [2024-11-19 10:34:20.401693] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.026 00:06:33.026 real 0m0.074s 00:06:33.026 user 0m0.048s 00:06:33.026 sys 0m0.026s 00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.026 10:34:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:33.026 ************************************ 00:06:33.026 END TEST skip_rpc_with_delay 00:06:33.026 ************************************ 00:06:33.026 10:34:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:33.026 10:34:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:33.026 10:34:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:33.026 10:34:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.026 10:34:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.026 10:34:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.026 ************************************ 00:06:33.026 START TEST exit_on_failed_rpc_init 00:06:33.026 ************************************ 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1218121 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1218121 
00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1218121 ']' 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.026 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.027 [2024-11-19 10:34:20.527275] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:33.027 [2024-11-19 10:34:20.527396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218121 ] 00:06:33.027 [2024-11-19 10:34:20.596858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.284 [2024-11-19 10:34:20.656930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.543 
10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:33.543 10:34:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.543 [2024-11-19 10:34:20.976867] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:33.543 [2024-11-19 10:34:20.976949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218239 ] 00:06:33.543 [2024-11-19 10:34:21.040800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.543 [2024-11-19 10:34:21.100300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.543 [2024-11-19 10:34:21.100429] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:33.543 [2024-11-19 10:34:21.100450] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:33.543 [2024-11-19 10:34:21.100462] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1218121 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1218121 ']' 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1218121 00:06:33.801 10:34:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1218121 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1218121' 00:06:33.801 killing process with pid 1218121 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1218121 00:06:33.801 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1218121 00:06:34.111 00:06:34.111 real 0m1.166s 00:06:34.111 user 0m1.286s 00:06:34.111 sys 0m0.438s 00:06:34.111 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.111 10:34:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 ************************************ 00:06:34.111 END TEST exit_on_failed_rpc_init 00:06:34.111 ************************************ 00:06:34.111 10:34:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:34.111 00:06:34.111 real 0m13.594s 00:06:34.111 user 0m12.854s 00:06:34.111 sys 0m1.659s 00:06:34.111 10:34:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.111 10:34:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 ************************************ 00:06:34.111 END TEST skip_rpc 00:06:34.111 ************************************ 00:06:34.111 10:34:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.111 10:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.111 10:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.111 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.395 ************************************ 00:06:34.395 START TEST rpc_client 00:06:34.395 ************************************ 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.395 * Looking for test storage... 00:06:34.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.395 10:34:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.395 --rc genhtml_branch_coverage=1 00:06:34.395 --rc genhtml_function_coverage=1 00:06:34.395 --rc genhtml_legend=1 00:06:34.395 --rc geninfo_all_blocks=1 00:06:34.395 --rc geninfo_unexecuted_blocks=1 00:06:34.395 00:06:34.395 ' 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.395 --rc genhtml_branch_coverage=1 
00:06:34.395 --rc genhtml_function_coverage=1 00:06:34.395 --rc genhtml_legend=1 00:06:34.395 --rc geninfo_all_blocks=1 00:06:34.395 --rc geninfo_unexecuted_blocks=1 00:06:34.395 00:06:34.395 ' 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.395 --rc genhtml_branch_coverage=1 00:06:34.395 --rc genhtml_function_coverage=1 00:06:34.395 --rc genhtml_legend=1 00:06:34.395 --rc geninfo_all_blocks=1 00:06:34.395 --rc geninfo_unexecuted_blocks=1 00:06:34.395 00:06:34.395 ' 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.395 --rc genhtml_branch_coverage=1 00:06:34.395 --rc genhtml_function_coverage=1 00:06:34.395 --rc genhtml_legend=1 00:06:34.395 --rc geninfo_all_blocks=1 00:06:34.395 --rc geninfo_unexecuted_blocks=1 00:06:34.395 00:06:34.395 ' 00:06:34.395 10:34:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:34.395 OK 00:06:34.395 10:34:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:34.395 00:06:34.395 real 0m0.164s 00:06:34.395 user 0m0.111s 00:06:34.395 sys 0m0.064s 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.395 10:34:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:34.395 ************************************ 00:06:34.395 END TEST rpc_client 00:06:34.395 ************************************ 00:06:34.396 10:34:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:34.396 10:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.396 10:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.396 10:34:21 -- common/autotest_common.sh@10 
-- # set +x 00:06:34.396 ************************************ 00:06:34.396 START TEST json_config 00:06:34.396 ************************************ 00:06:34.396 10:34:21 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:34.396 10:34:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.396 10:34:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.396 10:34:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.655 10:34:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.655 10:34:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.655 10:34:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.655 10:34:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.655 10:34:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.655 10:34:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.655 10:34:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:34.655 10:34:22 json_config -- scripts/common.sh@345 -- # : 1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.655 10:34:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.655 10:34:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@353 -- # local d=1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.655 10:34:22 json_config -- scripts/common.sh@355 -- # echo 1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.655 10:34:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@353 -- # local d=2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.655 10:34:22 json_config -- scripts/common.sh@355 -- # echo 2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.655 10:34:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.655 10:34:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.655 10:34:22 json_config -- scripts/common.sh@368 -- # return 0 00:06:34.655 10:34:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.655 10:34:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.655 --rc genhtml_branch_coverage=1 00:06:34.655 --rc genhtml_function_coverage=1 00:06:34.655 --rc genhtml_legend=1 00:06:34.655 --rc geninfo_all_blocks=1 00:06:34.655 --rc geninfo_unexecuted_blocks=1 00:06:34.655 00:06:34.655 ' 00:06:34.655 10:34:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.655 --rc genhtml_branch_coverage=1 00:06:34.655 --rc genhtml_function_coverage=1 00:06:34.656 --rc genhtml_legend=1 00:06:34.656 --rc geninfo_all_blocks=1 00:06:34.656 --rc geninfo_unexecuted_blocks=1 00:06:34.656 00:06:34.656 ' 00:06:34.656 10:34:22 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.656 --rc genhtml_branch_coverage=1 00:06:34.656 --rc genhtml_function_coverage=1 00:06:34.656 --rc genhtml_legend=1 00:06:34.656 --rc geninfo_all_blocks=1 00:06:34.656 --rc geninfo_unexecuted_blocks=1 00:06:34.656 00:06:34.656 ' 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.656 --rc genhtml_branch_coverage=1 00:06:34.656 --rc genhtml_function_coverage=1 00:06:34.656 --rc genhtml_legend=1 00:06:34.656 --rc geninfo_all_blocks=1 00:06:34.656 --rc geninfo_unexecuted_blocks=1 00:06:34.656 00:06:34.656 ' 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.656 10:34:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.656 10:34:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.656 10:34:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.656 10:34:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.656 10:34:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.656 10:34:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.656 10:34:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.656 10:34:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:34.656 10:34:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@51 -- # : 0 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.656 10:34:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:34.656 INFO: JSON configuration test init 00:06:34.656 10:34:22 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.656 10:34:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:34.656 10:34:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:34.656 10:34:22 json_config -- json_config/common.sh@10 -- # shift 00:06:34.656 10:34:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.656 10:34:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.656 10:34:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.656 10:34:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.656 10:34:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.656 10:34:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1218514 00:06:34.656 10:34:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:34.656 10:34:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.656 Waiting for target to run... 
00:06:34.656 10:34:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1218514 /var/tmp/spdk_tgt.sock 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 1218514 ']' 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.656 10:34:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.656 [2024-11-19 10:34:22.146064] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:34.656 [2024-11-19 10:34:22.146157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218514 ] 00:06:34.915 [2024-11-19 10:34:22.513401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.173 [2024-11-19 10:34:22.557557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.740 10:34:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.740 10:34:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:35.740 10:34:23 json_config -- json_config/common.sh@26 -- # echo '' 00:06:35.740 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:35.740 10:34:23 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.740 10:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:35.740 10:34:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.740 10:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:35.740 10:34:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:35.740 10:34:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:39.027 10:34:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.027 10:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:39.027 10:34:26 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@54 -- # sort 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:39.027 10:34:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.027 10:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:39.027 10:34:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:39.286 10:34:26 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:39.286 10:34:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.286 10:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.286 10:34:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:39.286 10:34:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:39.286 10:34:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:39.286 10:34:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.286 10:34:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.286 MallocForNvmf0 00:06:39.544 10:34:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.544 10:34:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.802 MallocForNvmf1 00:06:39.802 10:34:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:39.802 10:34:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.060 [2024-11-19 10:34:27.431657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.060 10:34:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.060 10:34:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.318 10:34:27 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.318 10:34:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.576 10:34:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.576 10:34:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.835 10:34:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:40.835 10:34:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.093 [2024-11-19 10:34:28.486993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.093 10:34:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:41.093 10:34:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.093 10:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.093 10:34:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:41.093 10:34:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.093 10:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.093 10:34:28 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:06:41.093 10:34:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.093 10:34:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.351 MallocBdevForConfigChangeCheck 00:06:41.351 10:34:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:41.351 10:34:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.351 10:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 10:34:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:41.351 10:34:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.918 10:34:29 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:41.918 INFO: shutting down applications... 
00:06:41.918 10:34:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:41.918 10:34:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:41.918 10:34:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:41.918 10:34:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:43.291 Calling clear_iscsi_subsystem 00:06:43.291 Calling clear_nvmf_subsystem 00:06:43.291 Calling clear_nbd_subsystem 00:06:43.291 Calling clear_ublk_subsystem 00:06:43.291 Calling clear_vhost_blk_subsystem 00:06:43.291 Calling clear_vhost_scsi_subsystem 00:06:43.291 Calling clear_bdev_subsystem 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:43.291 10:34:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:43.858 10:34:31 json_config -- json_config/json_config.sh@352 -- # break 00:06:43.858 10:34:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:43.858 10:34:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:43.858 10:34:31 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:43.858 10:34:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:43.858 10:34:31 json_config -- json_config/common.sh@35 -- # [[ -n 1218514 ]] 00:06:43.858 10:34:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1218514 00:06:43.858 10:34:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:43.858 10:34:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.858 10:34:31 json_config -- json_config/common.sh@41 -- # kill -0 1218514 00:06:43.858 10:34:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.427 10:34:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.427 10:34:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.427 10:34:31 json_config -- json_config/common.sh@41 -- # kill -0 1218514 00:06:44.427 10:34:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.427 10:34:31 json_config -- json_config/common.sh@43 -- # break 00:06:44.427 10:34:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.427 10:34:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.427 SPDK target shutdown done 00:06:44.427 10:34:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:44.427 INFO: relaunching applications... 
00:06:44.427 10:34:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.427 10:34:31 json_config -- json_config/common.sh@9 -- # local app=target 00:06:44.427 10:34:31 json_config -- json_config/common.sh@10 -- # shift 00:06:44.427 10:34:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.427 10:34:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.427 10:34:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.427 10:34:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.427 10:34:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.427 10:34:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1219717 00:06:44.427 10:34:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.427 10:34:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.427 Waiting for target to run... 00:06:44.427 10:34:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1219717 /var/tmp/spdk_tgt.sock 00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 1219717 ']' 00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.428 10:34:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.428 [2024-11-19 10:34:31.796681] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:44.428 [2024-11-19 10:34:31.796764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219717 ] 00:06:44.685 [2024-11-19 10:34:32.302976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.944 [2024-11-19 10:34:32.356801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.230 [2024-11-19 10:34:35.412265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.230 [2024-11-19 10:34:35.444760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:48.230 10:34:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.230 10:34:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:48.230 10:34:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:48.230 00:06:48.230 10:34:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:48.230 10:34:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:48.230 INFO: Checking if target configuration is the same... 
00:06:48.230 10:34:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.230 10:34:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:48.230 10:34:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:48.230 + '[' 2 -ne 2 ']' 00:06:48.230 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:48.230 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:48.230 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:48.230 +++ basename /dev/fd/62 00:06:48.230 ++ mktemp /tmp/62.XXX 00:06:48.230 + tmp_file_1=/tmp/62.OEp 00:06:48.230 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.230 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:48.230 + tmp_file_2=/tmp/spdk_tgt_config.json.gm9 00:06:48.230 + ret=0 00:06:48.230 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.488 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.488 + diff -u /tmp/62.OEp /tmp/spdk_tgt_config.json.gm9 00:06:48.488 + echo 'INFO: JSON config files are the same' 00:06:48.488 INFO: JSON config files are the same 00:06:48.488 + rm /tmp/62.OEp /tmp/spdk_tgt_config.json.gm9 00:06:48.488 + exit 0 00:06:48.488 10:34:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:48.488 10:34:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:48.488 INFO: changing configuration and checking if this can be detected... 
00:06:48.488 10:34:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:48.488 10:34:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:48.746 10:34:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.746 10:34:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:48.746 10:34:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:48.746 + '[' 2 -ne 2 ']' 00:06:48.746 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:48.746 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:48.746 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:48.746 +++ basename /dev/fd/62 00:06:48.746 ++ mktemp /tmp/62.XXX 00:06:48.746 + tmp_file_1=/tmp/62.iSy 00:06:48.746 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.746 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:48.746 + tmp_file_2=/tmp/spdk_tgt_config.json.SpO 00:06:48.746 + ret=0 00:06:48.746 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:49.004 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:49.263 + diff -u /tmp/62.iSy /tmp/spdk_tgt_config.json.SpO 00:06:49.263 + ret=1 00:06:49.263 + echo '=== Start of file: /tmp/62.iSy ===' 00:06:49.263 + cat /tmp/62.iSy 00:06:49.263 + echo '=== End of file: /tmp/62.iSy ===' 00:06:49.263 + echo '' 00:06:49.263 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SpO ===' 00:06:49.263 + cat /tmp/spdk_tgt_config.json.SpO 00:06:49.263 + echo '=== End of file: /tmp/spdk_tgt_config.json.SpO ===' 00:06:49.263 + echo '' 00:06:49.263 + rm /tmp/62.iSy /tmp/spdk_tgt_config.json.SpO 00:06:49.263 + exit 1 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:49.263 INFO: configuration change detected. 
00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 1219717 ]] 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.263 10:34:36 json_config -- json_config/json_config.sh@330 -- # killprocess 1219717 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@954 -- # '[' -z 1219717 ']' 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@958 -- # kill -0 
1219717 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@959 -- # uname 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1219717 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1219717' 00:06:49.263 killing process with pid 1219717 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@973 -- # kill 1219717 00:06:49.263 10:34:36 json_config -- common/autotest_common.sh@978 -- # wait 1219717 00:06:51.164 10:34:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.164 10:34:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:51.164 10:34:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.164 10:34:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.164 10:34:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:51.164 10:34:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:51.164 INFO: Success 00:06:51.164 00:06:51.164 real 0m16.444s 00:06:51.164 user 0m18.013s 00:06:51.164 sys 0m2.629s 00:06:51.164 10:34:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.164 10:34:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.164 ************************************ 00:06:51.164 END TEST json_config 00:06:51.164 ************************************ 00:06:51.164 10:34:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:51.164 10:34:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.164 10:34:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.164 10:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.164 ************************************ 00:06:51.164 START TEST json_config_extra_key 00:06:51.164 ************************************ 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.164 10:34:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.164 --rc genhtml_branch_coverage=1 00:06:51.164 --rc genhtml_function_coverage=1 00:06:51.164 --rc genhtml_legend=1 00:06:51.164 --rc geninfo_all_blocks=1 
00:06:51.164 --rc geninfo_unexecuted_blocks=1 00:06:51.164 00:06:51.164 ' 00:06:51.164 10:34:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.164 --rc genhtml_branch_coverage=1 00:06:51.164 --rc genhtml_function_coverage=1 00:06:51.164 --rc genhtml_legend=1 00:06:51.165 --rc geninfo_all_blocks=1 00:06:51.165 --rc geninfo_unexecuted_blocks=1 00:06:51.165 00:06:51.165 ' 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.165 --rc genhtml_branch_coverage=1 00:06:51.165 --rc genhtml_function_coverage=1 00:06:51.165 --rc genhtml_legend=1 00:06:51.165 --rc geninfo_all_blocks=1 00:06:51.165 --rc geninfo_unexecuted_blocks=1 00:06:51.165 00:06:51.165 ' 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.165 --rc genhtml_branch_coverage=1 00:06:51.165 --rc genhtml_function_coverage=1 00:06:51.165 --rc genhtml_legend=1 00:06:51.165 --rc geninfo_all_blocks=1 00:06:51.165 --rc geninfo_unexecuted_blocks=1 00:06:51.165 00:06:51.165 ' 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.165 10:34:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.165 10:34:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.165 10:34:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.165 10:34:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.165 10:34:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.165 10:34:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.165 10:34:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.165 10:34:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:51.165 10:34:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:51.165 10:34:38 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.165 10:34:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:51.165 INFO: launching applications... 00:06:51.165 10:34:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1220653 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:51.165 Waiting for target to run... 
00:06:51.165 10:34:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1220653 /var/tmp/spdk_tgt.sock 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1220653 ']' 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:51.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.165 10:34:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.165 [2024-11-19 10:34:38.616599] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:51.165 [2024-11-19 10:34:38.616690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220653 ] 00:06:51.424 [2024-11-19 10:34:38.948985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.424 [2024-11-19 10:34:38.990344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.994 10:34:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.994 10:34:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:51.994 00:06:51.994 10:34:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:51.994 INFO: shutting down applications... 00:06:51.994 10:34:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1220653 ]] 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1220653 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1220653 00:06:51.994 10:34:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1220653 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:52.562 10:34:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:52.562 SPDK target shutdown done 00:06:52.562 10:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:52.562 Success 00:06:52.562 00:06:52.562 real 0m1.681s 00:06:52.562 user 0m1.671s 00:06:52.562 sys 0m0.461s 00:06:52.562 10:34:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.562 10:34:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:06:52.562 ************************************ 00:06:52.562 END TEST json_config_extra_key 00:06:52.562 ************************************ 00:06:52.562 10:34:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.562 10:34:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.562 10:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.562 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 ************************************ 00:06:52.562 START TEST alias_rpc 00:06:52.562 ************************************ 00:06:52.562 10:34:40 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.821 * Looking for test storage... 00:06:52.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:52.821 10:34:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.821 10:34:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.821 10:34:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.821 10:34:40 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.821 10:34:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.821 10:34:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.821 10:34:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.822 10:34:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.822 --rc genhtml_branch_coverage=1 00:06:52.822 --rc genhtml_function_coverage=1 00:06:52.822 --rc genhtml_legend=1 00:06:52.822 --rc geninfo_all_blocks=1 00:06:52.822 --rc geninfo_unexecuted_blocks=1 00:06:52.822 
00:06:52.822 ' 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.822 --rc genhtml_branch_coverage=1 00:06:52.822 --rc genhtml_function_coverage=1 00:06:52.822 --rc genhtml_legend=1 00:06:52.822 --rc geninfo_all_blocks=1 00:06:52.822 --rc geninfo_unexecuted_blocks=1 00:06:52.822 00:06:52.822 ' 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.822 --rc genhtml_branch_coverage=1 00:06:52.822 --rc genhtml_function_coverage=1 00:06:52.822 --rc genhtml_legend=1 00:06:52.822 --rc geninfo_all_blocks=1 00:06:52.822 --rc geninfo_unexecuted_blocks=1 00:06:52.822 00:06:52.822 ' 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.822 --rc genhtml_branch_coverage=1 00:06:52.822 --rc genhtml_function_coverage=1 00:06:52.822 --rc genhtml_legend=1 00:06:52.822 --rc geninfo_all_blocks=1 00:06:52.822 --rc geninfo_unexecuted_blocks=1 00:06:52.822 00:06:52.822 ' 00:06:52.822 10:34:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.822 10:34:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1220965 00:06:52.822 10:34:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.822 10:34:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1220965 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1220965 ']' 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.822 10:34:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.822 [2024-11-19 10:34:40.350821] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:52.822 [2024-11-19 10:34:40.350925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220965 ] 00:06:52.822 [2024-11-19 10:34:40.420060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.081 [2024-11-19 10:34:40.482515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.339 10:34:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.339 10:34:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.339 10:34:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:53.602 10:34:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1220965 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1220965 ']' 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1220965 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1220965 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.602 
10:34:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1220965' 00:06:53.602 killing process with pid 1220965 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 1220965 00:06:53.602 10:34:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 1220965 00:06:54.168 00:06:54.168 real 0m1.360s 00:06:54.168 user 0m1.491s 00:06:54.168 sys 0m0.443s 00:06:54.168 10:34:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.168 10:34:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.168 ************************************ 00:06:54.168 END TEST alias_rpc 00:06:54.168 ************************************ 00:06:54.168 10:34:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:54.168 10:34:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:54.168 10:34:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.168 10:34:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.168 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.168 ************************************ 00:06:54.168 START TEST spdkcli_tcp 00:06:54.168 ************************************ 00:06:54.168 10:34:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:54.168 * Looking for test storage... 
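The kill sequence traced above (a `kill -0` liveness check, a `ps` comm lookup, a guard against signalling the sudo wrapper, then kill and wait) is a common teardown pattern. A minimal sketch using a hypothetical my_killprocess helper, not the actual autotest_common.sh implementation:

```shell
#!/bin/bash
# Sketch of the teardown traced above: confirm the PID is alive, refuse to
# signal a sudo wrapper process, then kill and reap it. Illustrative only;
# the real killprocess lives in common/autotest_common.sh.
my_killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1         # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
bg=$!
my_killprocess "$bg"
```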
00:06:54.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1221165 00:06:54.169 10:34:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:54.169 10:34:41 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1221165 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1221165 ']' 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.169 10:34:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.169 [2024-11-19 10:34:41.762178] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:54.169 [2024-11-19 10:34:41.762268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221165 ] 00:06:54.428 [2024-11-19 10:34:41.828844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.428 [2024-11-19 10:34:41.888120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.428 [2024-11-19 10:34:41.888125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.685 10:34:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.685 10:34:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:54.685 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1221183 00:06:54.685 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:54.685 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:54.944 [ 00:06:54.944 "bdev_malloc_delete", 00:06:54.944 "bdev_malloc_create", 00:06:54.944 "bdev_null_resize", 00:06:54.944 "bdev_null_delete", 00:06:54.944 "bdev_null_create", 00:06:54.944 "bdev_nvme_cuse_unregister", 00:06:54.944 "bdev_nvme_cuse_register", 00:06:54.944 "bdev_opal_new_user", 00:06:54.944 "bdev_opal_set_lock_state", 00:06:54.944 "bdev_opal_delete", 00:06:54.944 "bdev_opal_get_info", 00:06:54.944 "bdev_opal_create", 00:06:54.944 "bdev_nvme_opal_revert", 00:06:54.944 "bdev_nvme_opal_init", 00:06:54.944 "bdev_nvme_send_cmd", 00:06:54.944 "bdev_nvme_set_keys", 00:06:54.944 "bdev_nvme_get_path_iostat", 00:06:54.944 "bdev_nvme_get_mdns_discovery_info", 00:06:54.944 "bdev_nvme_stop_mdns_discovery", 00:06:54.944 "bdev_nvme_start_mdns_discovery", 00:06:54.944 "bdev_nvme_set_multipath_policy", 00:06:54.944 "bdev_nvme_set_preferred_path", 00:06:54.944 "bdev_nvme_get_io_paths", 00:06:54.944 "bdev_nvme_remove_error_injection", 00:06:54.944 "bdev_nvme_add_error_injection", 00:06:54.944 "bdev_nvme_get_discovery_info", 00:06:54.944 "bdev_nvme_stop_discovery", 00:06:54.944 "bdev_nvme_start_discovery", 00:06:54.944 "bdev_nvme_get_controller_health_info", 00:06:54.944 "bdev_nvme_disable_controller", 00:06:54.944 "bdev_nvme_enable_controller", 00:06:54.944 "bdev_nvme_reset_controller", 00:06:54.944 "bdev_nvme_get_transport_statistics", 00:06:54.944 "bdev_nvme_apply_firmware", 00:06:54.944 "bdev_nvme_detach_controller", 00:06:54.944 "bdev_nvme_get_controllers", 00:06:54.944 "bdev_nvme_attach_controller", 00:06:54.944 "bdev_nvme_set_hotplug", 00:06:54.944 "bdev_nvme_set_options", 00:06:54.944 "bdev_passthru_delete", 00:06:54.944 "bdev_passthru_create", 00:06:54.944 "bdev_lvol_set_parent_bdev", 00:06:54.944 "bdev_lvol_set_parent", 00:06:54.944 "bdev_lvol_check_shallow_copy", 00:06:54.944 "bdev_lvol_start_shallow_copy", 00:06:54.944 "bdev_lvol_grow_lvstore", 00:06:54.944 "bdev_lvol_get_lvols", 00:06:54.944 "bdev_lvol_get_lvstores", 
00:06:54.944 "bdev_lvol_delete", 00:06:54.944 "bdev_lvol_set_read_only", 00:06:54.944 "bdev_lvol_resize", 00:06:54.944 "bdev_lvol_decouple_parent", 00:06:54.944 "bdev_lvol_inflate", 00:06:54.944 "bdev_lvol_rename", 00:06:54.944 "bdev_lvol_clone_bdev", 00:06:54.944 "bdev_lvol_clone", 00:06:54.944 "bdev_lvol_snapshot", 00:06:54.944 "bdev_lvol_create", 00:06:54.944 "bdev_lvol_delete_lvstore", 00:06:54.944 "bdev_lvol_rename_lvstore", 00:06:54.944 "bdev_lvol_create_lvstore", 00:06:54.944 "bdev_raid_set_options", 00:06:54.944 "bdev_raid_remove_base_bdev", 00:06:54.944 "bdev_raid_add_base_bdev", 00:06:54.944 "bdev_raid_delete", 00:06:54.944 "bdev_raid_create", 00:06:54.944 "bdev_raid_get_bdevs", 00:06:54.944 "bdev_error_inject_error", 00:06:54.944 "bdev_error_delete", 00:06:54.944 "bdev_error_create", 00:06:54.944 "bdev_split_delete", 00:06:54.944 "bdev_split_create", 00:06:54.944 "bdev_delay_delete", 00:06:54.944 "bdev_delay_create", 00:06:54.944 "bdev_delay_update_latency", 00:06:54.944 "bdev_zone_block_delete", 00:06:54.944 "bdev_zone_block_create", 00:06:54.944 "blobfs_create", 00:06:54.944 "blobfs_detect", 00:06:54.944 "blobfs_set_cache_size", 00:06:54.944 "bdev_aio_delete", 00:06:54.944 "bdev_aio_rescan", 00:06:54.944 "bdev_aio_create", 00:06:54.944 "bdev_ftl_set_property", 00:06:54.944 "bdev_ftl_get_properties", 00:06:54.944 "bdev_ftl_get_stats", 00:06:54.944 "bdev_ftl_unmap", 00:06:54.944 "bdev_ftl_unload", 00:06:54.944 "bdev_ftl_delete", 00:06:54.944 "bdev_ftl_load", 00:06:54.944 "bdev_ftl_create", 00:06:54.944 "bdev_virtio_attach_controller", 00:06:54.944 "bdev_virtio_scsi_get_devices", 00:06:54.944 "bdev_virtio_detach_controller", 00:06:54.944 "bdev_virtio_blk_set_hotplug", 00:06:54.944 "bdev_iscsi_delete", 00:06:54.944 "bdev_iscsi_create", 00:06:54.944 "bdev_iscsi_set_options", 00:06:54.944 "accel_error_inject_error", 00:06:54.944 "ioat_scan_accel_module", 00:06:54.944 "dsa_scan_accel_module", 00:06:54.944 "iaa_scan_accel_module", 00:06:54.944 
"vfu_virtio_create_fs_endpoint", 00:06:54.944 "vfu_virtio_create_scsi_endpoint", 00:06:54.944 "vfu_virtio_scsi_remove_target", 00:06:54.944 "vfu_virtio_scsi_add_target", 00:06:54.944 "vfu_virtio_create_blk_endpoint", 00:06:54.944 "vfu_virtio_delete_endpoint", 00:06:54.944 "keyring_file_remove_key", 00:06:54.944 "keyring_file_add_key", 00:06:54.944 "keyring_linux_set_options", 00:06:54.944 "fsdev_aio_delete", 00:06:54.944 "fsdev_aio_create", 00:06:54.944 "iscsi_get_histogram", 00:06:54.944 "iscsi_enable_histogram", 00:06:54.944 "iscsi_set_options", 00:06:54.944 "iscsi_get_auth_groups", 00:06:54.944 "iscsi_auth_group_remove_secret", 00:06:54.944 "iscsi_auth_group_add_secret", 00:06:54.944 "iscsi_delete_auth_group", 00:06:54.944 "iscsi_create_auth_group", 00:06:54.944 "iscsi_set_discovery_auth", 00:06:54.945 "iscsi_get_options", 00:06:54.945 "iscsi_target_node_request_logout", 00:06:54.945 "iscsi_target_node_set_redirect", 00:06:54.945 "iscsi_target_node_set_auth", 00:06:54.945 "iscsi_target_node_add_lun", 00:06:54.945 "iscsi_get_stats", 00:06:54.945 "iscsi_get_connections", 00:06:54.945 "iscsi_portal_group_set_auth", 00:06:54.945 "iscsi_start_portal_group", 00:06:54.945 "iscsi_delete_portal_group", 00:06:54.945 "iscsi_create_portal_group", 00:06:54.945 "iscsi_get_portal_groups", 00:06:54.945 "iscsi_delete_target_node", 00:06:54.945 "iscsi_target_node_remove_pg_ig_maps", 00:06:54.945 "iscsi_target_node_add_pg_ig_maps", 00:06:54.945 "iscsi_create_target_node", 00:06:54.945 "iscsi_get_target_nodes", 00:06:54.945 "iscsi_delete_initiator_group", 00:06:54.945 "iscsi_initiator_group_remove_initiators", 00:06:54.945 "iscsi_initiator_group_add_initiators", 00:06:54.945 "iscsi_create_initiator_group", 00:06:54.945 "iscsi_get_initiator_groups", 00:06:54.945 "nvmf_set_crdt", 00:06:54.945 "nvmf_set_config", 00:06:54.945 "nvmf_set_max_subsystems", 00:06:54.945 "nvmf_stop_mdns_prr", 00:06:54.945 "nvmf_publish_mdns_prr", 00:06:54.945 "nvmf_subsystem_get_listeners", 00:06:54.945 
"nvmf_subsystem_get_qpairs", 00:06:54.945 "nvmf_subsystem_get_controllers", 00:06:54.945 "nvmf_get_stats", 00:06:54.945 "nvmf_get_transports", 00:06:54.945 "nvmf_create_transport", 00:06:54.945 "nvmf_get_targets", 00:06:54.945 "nvmf_delete_target", 00:06:54.945 "nvmf_create_target", 00:06:54.945 "nvmf_subsystem_allow_any_host", 00:06:54.945 "nvmf_subsystem_set_keys", 00:06:54.945 "nvmf_subsystem_remove_host", 00:06:54.945 "nvmf_subsystem_add_host", 00:06:54.945 "nvmf_ns_remove_host", 00:06:54.945 "nvmf_ns_add_host", 00:06:54.945 "nvmf_subsystem_remove_ns", 00:06:54.945 "nvmf_subsystem_set_ns_ana_group", 00:06:54.945 "nvmf_subsystem_add_ns", 00:06:54.945 "nvmf_subsystem_listener_set_ana_state", 00:06:54.945 "nvmf_discovery_get_referrals", 00:06:54.945 "nvmf_discovery_remove_referral", 00:06:54.945 "nvmf_discovery_add_referral", 00:06:54.945 "nvmf_subsystem_remove_listener", 00:06:54.945 "nvmf_subsystem_add_listener", 00:06:54.945 "nvmf_delete_subsystem", 00:06:54.945 "nvmf_create_subsystem", 00:06:54.945 "nvmf_get_subsystems", 00:06:54.945 "env_dpdk_get_mem_stats", 00:06:54.945 "nbd_get_disks", 00:06:54.945 "nbd_stop_disk", 00:06:54.945 "nbd_start_disk", 00:06:54.945 "ublk_recover_disk", 00:06:54.945 "ublk_get_disks", 00:06:54.945 "ublk_stop_disk", 00:06:54.945 "ublk_start_disk", 00:06:54.945 "ublk_destroy_target", 00:06:54.945 "ublk_create_target", 00:06:54.945 "virtio_blk_create_transport", 00:06:54.945 "virtio_blk_get_transports", 00:06:54.945 "vhost_controller_set_coalescing", 00:06:54.945 "vhost_get_controllers", 00:06:54.945 "vhost_delete_controller", 00:06:54.945 "vhost_create_blk_controller", 00:06:54.945 "vhost_scsi_controller_remove_target", 00:06:54.945 "vhost_scsi_controller_add_target", 00:06:54.945 "vhost_start_scsi_controller", 00:06:54.945 "vhost_create_scsi_controller", 00:06:54.945 "thread_set_cpumask", 00:06:54.945 "scheduler_set_options", 00:06:54.945 "framework_get_governor", 00:06:54.945 "framework_get_scheduler", 00:06:54.945 
"framework_set_scheduler", 00:06:54.945 "framework_get_reactors", 00:06:54.945 "thread_get_io_channels", 00:06:54.945 "thread_get_pollers", 00:06:54.945 "thread_get_stats", 00:06:54.945 "framework_monitor_context_switch", 00:06:54.945 "spdk_kill_instance", 00:06:54.945 "log_enable_timestamps", 00:06:54.945 "log_get_flags", 00:06:54.945 "log_clear_flag", 00:06:54.945 "log_set_flag", 00:06:54.945 "log_get_level", 00:06:54.945 "log_set_level", 00:06:54.945 "log_get_print_level", 00:06:54.945 "log_set_print_level", 00:06:54.945 "framework_enable_cpumask_locks", 00:06:54.945 "framework_disable_cpumask_locks", 00:06:54.945 "framework_wait_init", 00:06:54.945 "framework_start_init", 00:06:54.945 "scsi_get_devices", 00:06:54.945 "bdev_get_histogram", 00:06:54.945 "bdev_enable_histogram", 00:06:54.945 "bdev_set_qos_limit", 00:06:54.945 "bdev_set_qd_sampling_period", 00:06:54.945 "bdev_get_bdevs", 00:06:54.945 "bdev_reset_iostat", 00:06:54.945 "bdev_get_iostat", 00:06:54.945 "bdev_examine", 00:06:54.945 "bdev_wait_for_examine", 00:06:54.945 "bdev_set_options", 00:06:54.945 "accel_get_stats", 00:06:54.945 "accel_set_options", 00:06:54.945 "accel_set_driver", 00:06:54.945 "accel_crypto_key_destroy", 00:06:54.945 "accel_crypto_keys_get", 00:06:54.945 "accel_crypto_key_create", 00:06:54.945 "accel_assign_opc", 00:06:54.945 "accel_get_module_info", 00:06:54.945 "accel_get_opc_assignments", 00:06:54.945 "vmd_rescan", 00:06:54.945 "vmd_remove_device", 00:06:54.945 "vmd_enable", 00:06:54.945 "sock_get_default_impl", 00:06:54.945 "sock_set_default_impl", 00:06:54.945 "sock_impl_set_options", 00:06:54.945 "sock_impl_get_options", 00:06:54.945 "iobuf_get_stats", 00:06:54.945 "iobuf_set_options", 00:06:54.945 "keyring_get_keys", 00:06:54.945 "vfu_tgt_set_base_path", 00:06:54.945 "framework_get_pci_devices", 00:06:54.945 "framework_get_config", 00:06:54.945 "framework_get_subsystems", 00:06:54.945 "fsdev_set_opts", 00:06:54.945 "fsdev_get_opts", 00:06:54.945 "trace_get_info", 
00:06:54.945 "trace_get_tpoint_group_mask", 00:06:54.945 "trace_disable_tpoint_group", 00:06:54.945 "trace_enable_tpoint_group", 00:06:54.945 "trace_clear_tpoint_mask", 00:06:54.945 "trace_set_tpoint_mask", 00:06:54.945 "notify_get_notifications", 00:06:54.945 "notify_get_types", 00:06:54.945 "spdk_get_version", 00:06:54.945 "rpc_get_methods" 00:06:54.945 ] 00:06:54.945 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.945 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:54.945 10:34:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1221165 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1221165 ']' 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1221165 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221165 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221165' 00:06:54.945 killing process with pid 1221165 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1221165 00:06:54.945 10:34:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1221165 00:06:55.511 00:06:55.511 real 0m1.362s 00:06:55.511 user 0m2.438s 00:06:55.511 sys 0m0.473s 00:06:55.511 10:34:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.511 10:34:42 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.511 ************************************ 00:06:55.511 END TEST spdkcli_tcp 00:06:55.511 ************************************ 00:06:55.512 10:34:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:55.512 10:34:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.512 10:34:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.512 10:34:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.512 ************************************ 00:06:55.512 START TEST dpdk_mem_utility 00:06:55.512 ************************************ 00:06:55.512 10:34:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:55.512 * Looking for test storage... 00:06:55.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:55.512 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:55.512 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1221384 00:06:55.512 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.512 10:34:43 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1221384 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1221384 ']' 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.512 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.770 [2024-11-19 10:34:43.164952] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:55.770 [2024-11-19 10:34:43.165045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221384 ] 00:06:55.770 [2024-11-19 10:34:43.229746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.770 [2024-11-19 10:34:43.288236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.028 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.028 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:56.028 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:56.028 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:56.028 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.028 
10:34:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:56.028 { 00:06:56.028 "filename": "/tmp/spdk_mem_dump.txt" 00:06:56.028 } 00:06:56.028 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.028 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:56.028 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:56.028 1 heaps totaling size 810.000000 MiB 00:06:56.028 size: 810.000000 MiB heap id: 0 00:06:56.028 end heaps---------- 00:06:56.028 9 mempools totaling size 595.772034 MiB 00:06:56.028 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:56.028 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:56.028 size: 92.545471 MiB name: bdev_io_1221384 00:06:56.028 size: 50.003479 MiB name: msgpool_1221384 00:06:56.028 size: 36.509338 MiB name: fsdev_io_1221384 00:06:56.028 size: 21.763794 MiB name: PDU_Pool 00:06:56.028 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:56.028 size: 4.133484 MiB name: evtpool_1221384 00:06:56.028 size: 0.026123 MiB name: Session_Pool 00:06:56.028 end mempools------- 00:06:56.028 6 memzones totaling size 4.142822 MiB 00:06:56.028 size: 1.000366 MiB name: RG_ring_0_1221384 00:06:56.029 size: 1.000366 MiB name: RG_ring_1_1221384 00:06:56.029 size: 1.000366 MiB name: RG_ring_4_1221384 00:06:56.029 size: 1.000366 MiB name: RG_ring_5_1221384 00:06:56.029 size: 0.125366 MiB name: RG_ring_2_1221384 00:06:56.029 size: 0.015991 MiB name: RG_ring_3_1221384 00:06:56.029 end memzones------- 00:06:56.029 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:56.287 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:56.287 list of free elements. 
size: 10.862488 MiB 00:06:56.287 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:56.287 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:56.287 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:56.287 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:56.287 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:56.287 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:56.287 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:56.287 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:56.287 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:56.287 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:56.287 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:56.287 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:56.287 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:56.287 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:56.287 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:56.287 list of standard malloc elements. 
size: 199.218628 MiB 00:06:56.287 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:56.287 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:56.287 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:56.287 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:56.287 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:56.287 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:56.287 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:56.287 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:56.287 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:56.287 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:56.287 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:56.287 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:56.287 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:56.287 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:56.287 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:56.288 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:56.288 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:56.288 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:56.288 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:56.288 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:56.288 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:56.288 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:56.288 list of memzone associated elements. 
size: 599.918884 MiB 00:06:56.288 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:56.288 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:56.288 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:56.288 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:56.288 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:56.288 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1221384_0 00:06:56.288 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:56.288 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1221384_0 00:06:56.288 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:56.288 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1221384_0 00:06:56.288 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:56.288 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:56.288 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:56.288 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:56.288 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:56.288 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1221384_0 00:06:56.288 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:56.288 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1221384 00:06:56.288 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:56.288 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1221384 00:06:56.288 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:56.288 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:56.288 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:56.288 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:56.288 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:56.288 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:56.288 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:56.288 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:56.288 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:56.288 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1221384 00:06:56.288 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:56.288 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1221384 00:06:56.288 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:56.288 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1221384 00:06:56.288 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:56.288 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1221384 00:06:56.288 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:56.288 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1221384 00:06:56.288 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:56.288 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1221384 00:06:56.288 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:56.288 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:56.288 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:56.288 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:56.288 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:56.288 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:56.288 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:56.288 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1221384 00:06:56.288 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:56.288 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1221384 00:06:56.288 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:56.288 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:56.288 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:56.288 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:56.288 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:56.288 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1221384 00:06:56.288 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:56.288 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:56.288 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:56.288 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1221384 00:06:56.288 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:56.288 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1221384 00:06:56.288 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:56.288 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1221384 00:06:56.288 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:56.288 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:56.288 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:56.288 10:34:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1221384 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1221384 ']' 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1221384 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221384 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.288 10:34:43 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221384' 00:06:56.288 killing process with pid 1221384 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1221384 00:06:56.288 10:34:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1221384 00:06:56.546 00:06:56.546 real 0m1.155s 00:06:56.546 user 0m1.129s 00:06:56.546 sys 0m0.432s 00:06:56.546 10:34:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.546 10:34:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:56.547 ************************************ 00:06:56.547 END TEST dpdk_mem_utility 00:06:56.547 ************************************ 00:06:56.547 10:34:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:56.547 10:34:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.547 10:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.547 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 ************************************ 00:06:56.805 START TEST event 00:06:56.805 ************************************ 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:56.805 * Looking for test storage... 
00:06:56.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.805 10:34:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.805 10:34:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.805 10:34:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.805 10:34:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.805 10:34:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.805 10:34:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.805 10:34:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.805 10:34:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.805 10:34:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.805 10:34:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.805 10:34:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.805 10:34:44 event -- scripts/common.sh@344 -- # case "$op" in 00:06:56.805 10:34:44 event -- scripts/common.sh@345 -- # : 1 00:06:56.805 10:34:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.805 10:34:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.805 10:34:44 event -- scripts/common.sh@365 -- # decimal 1 00:06:56.805 10:34:44 event -- scripts/common.sh@353 -- # local d=1 00:06:56.805 10:34:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.805 10:34:44 event -- scripts/common.sh@355 -- # echo 1 00:06:56.805 10:34:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.805 10:34:44 event -- scripts/common.sh@366 -- # decimal 2 00:06:56.805 10:34:44 event -- scripts/common.sh@353 -- # local d=2 00:06:56.805 10:34:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.805 10:34:44 event -- scripts/common.sh@355 -- # echo 2 00:06:56.805 10:34:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.805 10:34:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.805 10:34:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.805 10:34:44 event -- scripts/common.sh@368 -- # return 0 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.805 --rc genhtml_branch_coverage=1 00:06:56.805 --rc genhtml_function_coverage=1 00:06:56.805 --rc genhtml_legend=1 00:06:56.805 --rc geninfo_all_blocks=1 00:06:56.805 --rc geninfo_unexecuted_blocks=1 00:06:56.805 00:06:56.805 ' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.805 --rc genhtml_branch_coverage=1 00:06:56.805 --rc genhtml_function_coverage=1 00:06:56.805 --rc genhtml_legend=1 00:06:56.805 --rc geninfo_all_blocks=1 00:06:56.805 --rc geninfo_unexecuted_blocks=1 00:06:56.805 00:06:56.805 ' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.805 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:56.805 --rc genhtml_branch_coverage=1 00:06:56.805 --rc genhtml_function_coverage=1 00:06:56.805 --rc genhtml_legend=1 00:06:56.805 --rc geninfo_all_blocks=1 00:06:56.805 --rc geninfo_unexecuted_blocks=1 00:06:56.805 00:06:56.805 ' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.805 --rc genhtml_branch_coverage=1 00:06:56.805 --rc genhtml_function_coverage=1 00:06:56.805 --rc genhtml_legend=1 00:06:56.805 --rc geninfo_all_blocks=1 00:06:56.805 --rc geninfo_unexecuted_blocks=1 00:06:56.805 00:06:56.805 ' 00:06:56.805 10:34:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:56.805 10:34:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.805 10:34:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:56.805 10:34:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.805 10:34:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 ************************************ 00:06:56.805 START TEST event_perf 00:06:56.805 ************************************ 00:06:56.806 10:34:44 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:56.806 Running I/O for 1 seconds...[2024-11-19 10:34:44.348376] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:56.806 [2024-11-19 10:34:44.348433] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221582 ] 00:06:56.806 [2024-11-19 10:34:44.416142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.063 [2024-11-19 10:34:44.482034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.063 [2024-11-19 10:34:44.482097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.063 [2024-11-19 10:34:44.482164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.063 [2024-11-19 10:34:44.482167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.997 Running I/O for 1 seconds... 00:06:57.997 lcore 0: 236783 00:06:57.997 lcore 1: 236782 00:06:57.997 lcore 2: 236781 00:06:57.997 lcore 3: 236782 00:06:57.997 done. 
00:06:57.997 00:06:57.997 real 0m1.211s 00:06:57.997 user 0m4.132s 00:06:57.997 sys 0m0.073s 00:06:57.997 10:34:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.997 10:34:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.997 ************************************ 00:06:57.997 END TEST event_perf 00:06:57.997 ************************************ 00:06:57.997 10:34:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:57.997 10:34:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:57.997 10:34:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.997 10:34:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.997 ************************************ 00:06:57.997 START TEST event_reactor 00:06:57.997 ************************************ 00:06:57.997 10:34:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:57.997 [2024-11-19 10:34:45.607655] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:57.997 [2024-11-19 10:34:45.607718] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221746 ] 00:06:58.255 [2024-11-19 10:34:45.675462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.255 [2024-11-19 10:34:45.732356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.188 test_start 00:06:59.188 oneshot 00:06:59.188 tick 100 00:06:59.188 tick 100 00:06:59.188 tick 250 00:06:59.188 tick 100 00:06:59.188 tick 100 00:06:59.188 tick 100 00:06:59.188 tick 250 00:06:59.188 tick 500 00:06:59.188 tick 100 00:06:59.188 tick 100 00:06:59.188 tick 250 00:06:59.188 tick 100 00:06:59.188 tick 100 00:06:59.188 test_end 00:06:59.188 00:06:59.188 real 0m1.200s 00:06:59.188 user 0m1.129s 00:06:59.188 sys 0m0.066s 00:06:59.188 10:34:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.188 10:34:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:59.188 ************************************ 00:06:59.188 END TEST event_reactor 00:06:59.188 ************************************ 00:06:59.447 10:34:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:59.447 10:34:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:59.447 10:34:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.447 10:34:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.447 ************************************ 00:06:59.447 START TEST event_reactor_perf 00:06:59.447 ************************************ 00:06:59.447 10:34:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:59.447 [2024-11-19 10:34:46.851925] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:59.447 [2024-11-19 10:34:46.851976] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222006 ] 00:06:59.447 [2024-11-19 10:34:46.914772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.447 [2024-11-19 10:34:46.970264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.819 test_start 00:07:00.819 test_end 00:07:00.819 Performance: 448033 events per second 00:07:00.819 00:07:00.819 real 0m1.195s 00:07:00.819 user 0m1.128s 00:07:00.819 sys 0m0.062s 00:07:00.819 10:34:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.819 10:34:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.819 ************************************ 00:07:00.819 END TEST event_reactor_perf 00:07:00.819 ************************************ 00:07:00.819 10:34:48 event -- event/event.sh@49 -- # uname -s 00:07:00.819 10:34:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:00.819 10:34:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:00.819 10:34:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.819 10:34:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.819 10:34:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.819 ************************************ 00:07:00.819 START TEST event_scheduler 00:07:00.819 ************************************ 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:00.819 * Looking for test storage... 00:07:00.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.819 10:34:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.819 --rc genhtml_branch_coverage=1 00:07:00.819 --rc genhtml_function_coverage=1 00:07:00.819 --rc genhtml_legend=1 00:07:00.819 --rc geninfo_all_blocks=1 00:07:00.819 --rc geninfo_unexecuted_blocks=1 00:07:00.819 00:07:00.819 ' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.819 --rc genhtml_branch_coverage=1 00:07:00.819 --rc genhtml_function_coverage=1 00:07:00.819 --rc 
genhtml_legend=1 00:07:00.819 --rc geninfo_all_blocks=1 00:07:00.819 --rc geninfo_unexecuted_blocks=1 00:07:00.819 00:07:00.819 ' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.819 --rc genhtml_branch_coverage=1 00:07:00.819 --rc genhtml_function_coverage=1 00:07:00.819 --rc genhtml_legend=1 00:07:00.819 --rc geninfo_all_blocks=1 00:07:00.819 --rc geninfo_unexecuted_blocks=1 00:07:00.819 00:07:00.819 ' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.819 --rc genhtml_branch_coverage=1 00:07:00.819 --rc genhtml_function_coverage=1 00:07:00.819 --rc genhtml_legend=1 00:07:00.819 --rc geninfo_all_blocks=1 00:07:00.819 --rc geninfo_unexecuted_blocks=1 00:07:00.819 00:07:00.819 ' 00:07:00.819 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:00.819 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1222205 00:07:00.819 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:00.819 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.819 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1222205 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1222205 ']' 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.819 10:34:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.819 [2024-11-19 10:34:48.264088] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:00.819 [2024-11-19 10:34:48.264170] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222205 ] 00:07:00.819 [2024-11-19 10:34:48.331677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.819 [2024-11-19 10:34:48.394429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.819 [2024-11-19 10:34:48.394456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.819 [2024-11-19 10:34:48.394514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.819 [2024-11-19 10:34:48.394518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:01.078 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 [2024-11-19 10:34:48.515500] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:01.078 [2024-11-19 10:34:48.515529] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:01.078 [2024-11-19 10:34:48.515547] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:01.078 [2024-11-19 10:34:48.515558] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:01.078 [2024-11-19 10:34:48.515568] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 [2024-11-19 10:34:48.617591] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 ************************************ 00:07:01.078 START TEST scheduler_create_thread 00:07:01.078 ************************************ 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 2 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 3 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 4 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 5 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 6 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.078 7 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.078 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 8 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.336 10:34:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 9 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 10 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.336 10:34:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:01.336 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.337 10:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.941 10:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.941 00:07:01.941 real 0m0.590s 00:07:01.941 user 0m0.012s 00:07:01.941 sys 0m0.002s 00:07:01.941 10:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.941 10:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.941 ************************************ 00:07:01.941 END TEST scheduler_create_thread 00:07:01.941 ************************************ 00:07:01.941 10:34:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:01.941 10:34:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1222205 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1222205 ']' 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1222205 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222205 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222205' 00:07:01.941 killing process with pid 1222205 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1222205 00:07:01.941 10:34:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1222205 00:07:02.225 [2024-11-19 10:34:49.718001] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:02.484 00:07:02.484 real 0m1.844s 00:07:02.484 user 0m2.546s 00:07:02.484 sys 0m0.354s 00:07:02.484 10:34:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.484 10:34:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.484 ************************************ 00:07:02.484 END TEST event_scheduler 00:07:02.484 ************************************ 00:07:02.484 10:34:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:02.484 10:34:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:02.484 10:34:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.484 10:34:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.484 10:34:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.484 ************************************ 00:07:02.484 START TEST app_repeat 00:07:02.484 ************************************ 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1222398 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1222398' 00:07:02.484 Process app_repeat pid: 1222398 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:02.484 spdk_app_start Round 0 00:07:02.484 10:34:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1222398 /var/tmp/spdk-nbd.sock 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1222398 ']' 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.484 10:34:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.484 [2024-11-19 10:34:50.011455] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:02.484 [2024-11-19 10:34:50.011525] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222398 ] 00:07:02.484 [2024-11-19 10:34:50.081859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.742 [2024-11-19 10:34:50.144807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.742 [2024-11-19 10:34:50.144813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.742 10:34:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.742 10:34:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:02.743 10:34:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.000 Malloc0 00:07:03.000 10:34:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.259 Malloc1 00:07:03.259 10:34:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.259 
10:34:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.259 10:34:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.825 /dev/nbd0 00:07:03.825 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.825 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:03.826 1+0 records in 00:07:03.826 1+0 records out 00:07:03.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175466 s, 23.3 MB/s 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.826 10:34:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.826 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.826 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.826 10:34:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.083 /dev/nbd1 00:07:04.083 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.083 10:34:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.083 10:34:51 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.083 1+0 records in 00:07:04.083 1+0 records out 00:07:04.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022389 s, 18.3 MB/s 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.083 10:34:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.084 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.084 10:34:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.084 10:34:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.084 10:34:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.084 10:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.342 { 00:07:04.342 "nbd_device": "/dev/nbd0", 00:07:04.342 "bdev_name": "Malloc0" 00:07:04.342 }, 00:07:04.342 { 00:07:04.342 "nbd_device": "/dev/nbd1", 00:07:04.342 "bdev_name": "Malloc1" 00:07:04.342 } 00:07:04.342 ]' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.342 { 00:07:04.342 "nbd_device": "/dev/nbd0", 00:07:04.342 "bdev_name": "Malloc0" 00:07:04.342 
}, 00:07:04.342 { 00:07:04.342 "nbd_device": "/dev/nbd1", 00:07:04.342 "bdev_name": "Malloc1" 00:07:04.342 } 00:07:04.342 ]' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.342 /dev/nbd1' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.342 /dev/nbd1' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.342 256+0 records in 00:07:04.342 256+0 records out 00:07:04.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382437 s, 274 MB/s 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.342 256+0 records in 00:07:04.342 256+0 records out 00:07:04.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197002 s, 53.2 MB/s 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.342 256+0 records in 00:07:04.342 256+0 records out 00:07:04.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220146 s, 47.6 MB/s 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.342 10:34:51 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.342 10:34:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.600 10:34:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.172 10:34:52 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.172 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.429 10:34:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.429 10:34:52 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.687 10:34:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.687 [2024-11-19 10:34:53.300500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.945 [2024-11-19 10:34:53.356584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.945 [2024-11-19 10:34:53.356584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.945 [2024-11-19 10:34:53.416193] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.945 [2024-11-19 10:34:53.416257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.471 10:34:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.472 10:34:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:08.472 spdk_app_start Round 1 00:07:08.472 10:34:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1222398 /var/tmp/spdk-nbd.sock 00:07:08.472 10:34:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1222398 ']' 00:07:08.729 10:34:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.729 10:34:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.729 10:34:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:08.729 10:34:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.729 10:34:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.987 10:34:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.987 10:34:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:08.987 10:34:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.245 Malloc0 00:07:09.245 10:34:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.503 Malloc1 00:07:09.503 10:34:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.503 10:34:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.761 /dev/nbd0 00:07:09.761 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.761 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.761 1+0 records in 00:07:09.761 1+0 records out 00:07:09.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245917 s, 16.7 MB/s 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:09.761 10:34:57 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.761 10:34:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:09.761 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.761 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.761 10:34:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:10.025 /dev/nbd1 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.025 1+0 records in 00:07:10.025 1+0 records out 00:07:10.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222755 s, 18.4 MB/s 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.025 10:34:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.025 10:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.283 { 00:07:10.283 "nbd_device": "/dev/nbd0", 00:07:10.283 "bdev_name": "Malloc0" 00:07:10.283 }, 00:07:10.283 { 00:07:10.283 "nbd_device": "/dev/nbd1", 00:07:10.283 "bdev_name": "Malloc1" 00:07:10.283 } 00:07:10.283 ]' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.283 { 00:07:10.283 "nbd_device": "/dev/nbd0", 00:07:10.283 "bdev_name": "Malloc0" 00:07:10.283 }, 00:07:10.283 { 00:07:10.283 "nbd_device": "/dev/nbd1", 00:07:10.283 "bdev_name": "Malloc1" 00:07:10.283 } 00:07:10.283 ]' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.283 /dev/nbd1' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.283 /dev/nbd1' 00:07:10.283 
10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.283 256+0 records in 00:07:10.283 256+0 records out 00:07:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539619 s, 194 MB/s 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.283 10:34:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.541 256+0 records in 00:07:10.541 256+0 records out 00:07:10.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209769 s, 50.0 MB/s 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.541 256+0 records in 00:07:10.541 256+0 records out 00:07:10.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220142 s, 47.6 MB/s 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.541 10:34:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.799 10:34:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.057 10:34:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.057 10:34:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.315 10:34:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.316 10:34:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.316 10:34:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.573 10:34:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.830 [2024-11-19 10:34:59.355417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.830 [2024-11-19 10:34:59.409940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.830 [2024-11-19 10:34:59.409940] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.088 [2024-11-19 10:34:59.471421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.088 [2024-11-19 10:34:59.471485] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:14.615 10:35:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.615 10:35:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:14.615 spdk_app_start Round 2 00:07:14.615 10:35:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1222398 /var/tmp/spdk-nbd.sock 00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1222398 ']' 00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.615 10:35:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.873 10:35:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.873 10:35:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:14.873 10:35:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.131 Malloc0 00:07:15.131 10:35:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.389 Malloc1 00:07:15.389 10:35:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.389 10:35:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.954 /dev/nbd0 00:07:15.954 10:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.954 10:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.954 1+0 records in 00:07:15.954 1+0 records out 00:07:15.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273584 s, 15.0 MB/s 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:15.954 10:35:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.954 10:35:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:15.954 10:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.954 10:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.954 10:35:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.212 /dev/nbd1 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.213 1+0 records in 00:07:16.213 1+0 records out 00:07:16.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182803 s, 22.4 MB/s 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.213 10:35:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.213 10:35:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.471 { 00:07:16.471 "nbd_device": "/dev/nbd0", 00:07:16.471 "bdev_name": "Malloc0" 00:07:16.471 }, 00:07:16.471 { 00:07:16.471 "nbd_device": "/dev/nbd1", 00:07:16.471 "bdev_name": "Malloc1" 00:07:16.471 } 00:07:16.471 ]' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.471 { 00:07:16.471 "nbd_device": "/dev/nbd0", 00:07:16.471 "bdev_name": "Malloc0" 00:07:16.471 }, 00:07:16.471 { 00:07:16.471 "nbd_device": "/dev/nbd1", 00:07:16.471 "bdev_name": "Malloc1" 00:07:16.471 } 00:07:16.471 ]' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.471 /dev/nbd1' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.471 /dev/nbd1' 00:07:16.471 
10:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.471 256+0 records in 00:07:16.471 256+0 records out 00:07:16.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515102 s, 204 MB/s 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.471 256+0 records in 00:07:16.471 256+0 records out 00:07:16.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198901 s, 52.7 MB/s 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.471 10:35:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.471 256+0 records in 00:07:16.471 256+0 records out 00:07:16.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216991 s, 48.3 MB/s 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.471 10:35:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.472 10:35:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.730 10:35:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.989 10:35:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.989 10:35:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.555 10:35:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.555 10:35:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.813 10:35:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:18.071 [2024-11-19 10:35:05.439502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.071 [2024-11-19 10:35:05.494529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.071 [2024-11-19 10:35:05.494534] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.071 [2024-11-19 10:35:05.554640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.071 [2024-11-19 10:35:05.554704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:21.351 10:35:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1222398 /var/tmp/spdk-nbd.sock 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1222398 ']' 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:21.351 10:35:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1222398 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1222398 ']' 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1222398 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222398 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222398' 00:07:21.351 killing process with pid 1222398 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1222398 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1222398 00:07:21.351 spdk_app_start is called in Round 0. 00:07:21.351 Shutdown signal received, stop current app iteration 00:07:21.351 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:21.351 spdk_app_start is called in Round 1. 00:07:21.351 Shutdown signal received, stop current app iteration 00:07:21.351 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:21.351 spdk_app_start is called in Round 2. 
00:07:21.351 Shutdown signal received, stop current app iteration 00:07:21.351 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:21.351 spdk_app_start is called in Round 3. 00:07:21.351 Shutdown signal received, stop current app iteration 00:07:21.351 10:35:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:21.351 10:35:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:21.351 00:07:21.351 real 0m18.739s 00:07:21.351 user 0m41.373s 00:07:21.351 sys 0m3.178s 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.351 10:35:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.351 ************************************ 00:07:21.351 END TEST app_repeat 00:07:21.351 ************************************ 00:07:21.351 10:35:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:21.351 10:35:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.351 10:35:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.351 10:35:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.351 10:35:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.351 ************************************ 00:07:21.351 START TEST cpu_locks 00:07:21.351 ************************************ 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.351 * Looking for test storage... 
00:07:21.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.351 10:35:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.351 --rc genhtml_branch_coverage=1 00:07:21.351 --rc genhtml_function_coverage=1 00:07:21.351 --rc genhtml_legend=1 00:07:21.351 --rc geninfo_all_blocks=1 00:07:21.351 --rc geninfo_unexecuted_blocks=1 00:07:21.351 00:07:21.351 ' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.351 --rc genhtml_branch_coverage=1 00:07:21.351 --rc genhtml_function_coverage=1 00:07:21.351 --rc genhtml_legend=1 00:07:21.351 --rc geninfo_all_blocks=1 00:07:21.351 --rc geninfo_unexecuted_blocks=1 
00:07:21.351 00:07:21.351 ' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.351 --rc genhtml_branch_coverage=1 00:07:21.351 --rc genhtml_function_coverage=1 00:07:21.351 --rc genhtml_legend=1 00:07:21.351 --rc geninfo_all_blocks=1 00:07:21.351 --rc geninfo_unexecuted_blocks=1 00:07:21.351 00:07:21.351 ' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.351 --rc genhtml_branch_coverage=1 00:07:21.351 --rc genhtml_function_coverage=1 00:07:21.351 --rc genhtml_legend=1 00:07:21.351 --rc geninfo_all_blocks=1 00:07:21.351 --rc geninfo_unexecuted_blocks=1 00:07:21.351 00:07:21.351 ' 00:07:21.351 10:35:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:21.351 10:35:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:21.351 10:35:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:21.351 10:35:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.351 10:35:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.351 ************************************ 00:07:21.351 START TEST default_locks 00:07:21.351 ************************************ 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1224884 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1224884 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1224884 ']' 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.351 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.352 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.352 10:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.610 [2024-11-19 10:35:09.005776] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:21.610 [2024-11-19 10:35:09.005855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224884 ] 00:07:21.610 [2024-11-19 10:35:09.072616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.610 [2024-11-19 10:35:09.133980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.868 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.868 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:21.868 10:35:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1224884 00:07:21.868 10:35:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1224884 00:07:21.868 10:35:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.126 lslocks: write error 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1224884 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1224884 ']' 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1224884 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224884 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1224884' 00:07:22.126 killing process with pid 1224884 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1224884 00:07:22.126 10:35:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1224884 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1224884 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1224884 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1224884 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1224884 ']' 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1224884) - No such process 00:07:22.692 ERROR: process (pid: 1224884) is no longer running 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.692 00:07:22.692 real 0m1.162s 00:07:22.692 user 0m1.113s 00:07:22.692 sys 0m0.515s 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.692 10:35:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.692 ************************************ 00:07:22.692 END TEST default_locks 00:07:22.692 ************************************ 00:07:22.692 10:35:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:22.692 10:35:10 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.692 10:35:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.692 10:35:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.692 ************************************ 00:07:22.692 START TEST default_locks_via_rpc 00:07:22.692 ************************************ 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1225050 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1225050 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1225050 ']' 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.692 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.693 [2024-11-19 10:35:10.217467] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:22.693 [2024-11-19 10:35:10.217555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225050 ] 00:07:22.693 [2024-11-19 10:35:10.289202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.950 [2024-11-19 10:35:10.352384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.208 10:35:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1225050 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1225050 00:07:23.208 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1225050 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1225050 ']' 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1225050 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225050 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225050' 00:07:23.466 killing process with pid 1225050 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1225050 00:07:23.466 10:35:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1225050 00:07:23.779 00:07:23.779 real 0m1.214s 00:07:23.779 user 0m1.187s 00:07:23.779 sys 0m0.506s 00:07:23.779 10:35:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.779 10:35:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.779 ************************************ 00:07:23.779 END TEST default_locks_via_rpc 00:07:23.779 ************************************ 00:07:23.779 10:35:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:23.779 10:35:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.779 10:35:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.779 10:35:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.036 ************************************ 00:07:24.036 START TEST non_locking_app_on_locked_coremask 00:07:24.036 ************************************ 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1225322 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1225322 /var/tmp/spdk.sock 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1225322 ']' 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:24.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.036 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.036 [2024-11-19 10:35:11.476831] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:24.036 [2024-11-19 10:35:11.476915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225322 ] 00:07:24.036 [2024-11-19 10:35:11.544348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.036 [2024-11-19 10:35:11.600152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1225342 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1225342 /var/tmp/spdk2.sock 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1225342 ']' 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.293 10:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.551 [2024-11-19 10:35:11.915658] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:24.551 [2024-11-19 10:35:11.915763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225342 ] 00:07:24.551 [2024-11-19 10:35:12.013906] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:24.551 [2024-11-19 10:35:12.013944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.551 [2024-11-19 10:35:12.126250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.483 10:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.483 10:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.483 10:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1225322 00:07:25.483 10:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1225322 00:07:25.483 10:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.741 lslocks: write error 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1225322 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1225322 ']' 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1225322 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225322 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1225322' 00:07:25.741 killing process with pid 1225322 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1225322 00:07:25.741 10:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1225322 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1225342 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1225342 ']' 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1225342 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225342 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225342' 00:07:26.674 killing process with pid 1225342 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1225342 00:07:26.674 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1225342 00:07:27.239 00:07:27.239 real 0m3.184s 00:07:27.239 user 0m3.395s 00:07:27.239 sys 0m1.048s 00:07:27.239 10:35:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.239 10:35:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 ************************************ 00:07:27.239 END TEST non_locking_app_on_locked_coremask 00:07:27.239 ************************************ 00:07:27.239 10:35:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:27.239 10:35:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.239 10:35:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.239 10:35:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 ************************************ 00:07:27.239 START TEST locking_app_on_unlocked_coremask 00:07:27.239 ************************************ 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1225652 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1225652 /var/tmp/spdk.sock 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1225652 ']' 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.239 10:35:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.239 10:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 [2024-11-19 10:35:14.711109] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:27.239 [2024-11-19 10:35:14.711202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225652 ] 00:07:27.239 [2024-11-19 10:35:14.777943] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:27.239 [2024-11-19 10:35:14.777979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.239 [2024-11-19 10:35:14.837277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1225776 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1225776 /var/tmp/spdk2.sock 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1225776 ']' 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.498 10:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.756 [2024-11-19 10:35:15.152600] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:27.756 [2024-11-19 10:35:15.152699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225776 ] 00:07:27.756 [2024-11-19 10:35:15.249785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.756 [2024-11-19 10:35:15.362197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.710 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.710 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.710 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1225776 00:07:28.710 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1225776 00:07:28.710 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.008 lslocks: write error 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1225652 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1225652 ']' 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1225652 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225652 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225652' 00:07:29.008 killing process with pid 1225652 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1225652 00:07:29.008 10:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1225652 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1225776 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1225776 ']' 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1225776 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225776 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225776' 00:07:29.942 killing process with pid 1225776 00:07:29.942 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1225776 00:07:29.942 10:35:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1225776 00:07:30.508 00:07:30.508 real 0m3.186s 00:07:30.508 user 0m3.435s 00:07:30.508 sys 0m1.031s 00:07:30.508 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.508 10:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 ************************************ 00:07:30.508 END TEST locking_app_on_unlocked_coremask 00:07:30.508 ************************************ 00:07:30.508 10:35:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.508 10:35:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.508 10:35:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.508 10:35:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.508 ************************************ 00:07:30.508 START TEST locking_app_on_locked_coremask 00:07:30.509 ************************************ 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1226089 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1226089 /var/tmp/spdk.sock 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1226089 ']' 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.509 10:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.509 [2024-11-19 10:35:17.951793] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:30.509 [2024-11-19 10:35:17.951883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226089 ] 00:07:30.509 [2024-11-19 10:35:18.016297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.509 [2024-11-19 10:35:18.076841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1226214 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1226214 /var/tmp/spdk2.sock 
00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1226214 /var/tmp/spdk2.sock 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1226214 /var/tmp/spdk2.sock 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1226214 ']' 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.767 10:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.024 [2024-11-19 10:35:18.405401] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:31.024 [2024-11-19 10:35:18.405502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226214 ] 00:07:31.024 [2024-11-19 10:35:18.504922] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1226089 has claimed it. 00:07:31.024 [2024-11-19 10:35:18.504975] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1226214) - No such process 00:07:31.589 ERROR: process (pid: 1226214) is no longer running 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1226089 00:07:31.589 10:35:19 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1226089 00:07:31.589 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.848 lslocks: write error 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1226089 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1226089 ']' 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1226089 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.848 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226089 00:07:32.104 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.104 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.104 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226089' 00:07:32.104 killing process with pid 1226089 00:07:32.104 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1226089 00:07:32.104 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1226089 00:07:32.364 00:07:32.364 real 0m2.028s 00:07:32.364 user 0m2.233s 00:07:32.364 sys 0m0.639s 00:07:32.364 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.364 10:35:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.364 ************************************ 00:07:32.364 END TEST locking_app_on_locked_coremask 00:07:32.364 ************************************ 00:07:32.364 10:35:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:32.364 10:35:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.364 10:35:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.364 10:35:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 ************************************ 00:07:32.364 START TEST locking_overlapped_coremask 00:07:32.364 ************************************ 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1226384 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1226384 /var/tmp/spdk.sock 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1226384 ']' 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.364 10:35:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.622 [2024-11-19 10:35:20.035671] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:32.622 [2024-11-19 10:35:20.035780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226384 ] 00:07:32.622 [2024-11-19 10:35:20.107004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.622 [2024-11-19 10:35:20.171733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.622 [2024-11-19 10:35:20.173324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.622 [2024-11-19 10:35:20.173337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1226465 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1226465 /var/tmp/spdk2.sock 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1226465 /var/tmp/spdk2.sock 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1226465 /var/tmp/spdk2.sock 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1226465 ']' 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.881 10:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.139 [2024-11-19 10:35:20.519018] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:33.139 [2024-11-19 10:35:20.519105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226465 ] 00:07:33.139 [2024-11-19 10:35:20.629192] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1226384 has claimed it. 00:07:33.139 [2024-11-19 10:35:20.629255] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1226465) - No such process 00:07:33.704 ERROR: process (pid: 1226465) is no longer running 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1226384 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1226384 ']' 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1226384 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.704 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226384 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226384' 00:07:33.705 killing process with pid 1226384 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1226384 00:07:33.705 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1226384 00:07:34.271 00:07:34.271 real 0m1.709s 00:07:34.271 user 0m4.726s 00:07:34.271 sys 0m0.482s 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 
************************************ 00:07:34.271 END TEST locking_overlapped_coremask 00:07:34.271 ************************************ 00:07:34.271 10:35:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:34.271 10:35:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.271 10:35:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.271 10:35:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 ************************************ 00:07:34.271 START TEST locking_overlapped_coremask_via_rpc 00:07:34.271 ************************************ 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1226681 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1226681 /var/tmp/spdk.sock 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1226681 ']' 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.271 10:35:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.271 [2024-11-19 10:35:21.791042] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:34.271 [2024-11-19 10:35:21.791136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226681 ] 00:07:34.272 [2024-11-19 10:35:21.856730] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:34.272 [2024-11-19 10:35:21.856765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.530 [2024-11-19 10:35:21.921098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.530 [2024-11-19 10:35:21.921162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.530 [2024-11-19 10:35:21.921165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1226687 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1226687 /var/tmp/spdk2.sock 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1226687 ']' 00:07:34.788 10:35:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:34.788 10:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.788 [2024-11-19 10:35:22.262979] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:34.788 [2024-11-19 10:35:22.263069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226687 ] 00:07:34.788 [2024-11-19 10:35:22.366869] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:34.788 [2024-11-19 10:35:22.366902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.046 [2024-11-19 10:35:22.489229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.046 [2024-11-19 10:35:22.492401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:35.046 [2024-11-19 10:35:22.492404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.980 10:35:23 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.980 [2024-11-19 10:35:23.254405] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1226681 has claimed it. 00:07:35.980 request: 00:07:35.980 { 00:07:35.980 "method": "framework_enable_cpumask_locks", 00:07:35.980 "req_id": 1 00:07:35.980 } 00:07:35.980 Got JSON-RPC error response 00:07:35.980 response: 00:07:35.980 { 00:07:35.980 "code": -32603, 00:07:35.980 "message": "Failed to claim CPU core: 2" 00:07:35.980 } 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1226681 /var/tmp/spdk.sock 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1226681 ']' 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1226687 /var/tmp/spdk2.sock 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1226687 ']' 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.980 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.238 00:07:36.238 real 0m2.071s 00:07:36.238 user 0m1.142s 00:07:36.238 sys 0m0.179s 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.238 10:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 ************************************ 00:07:36.238 END TEST locking_overlapped_coremask_via_rpc 00:07:36.238 ************************************ 00:07:36.238 10:35:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:36.238 10:35:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1226681 ]] 00:07:36.238 10:35:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1226681 00:07:36.238 10:35:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1226681 ']' 00:07:36.238 10:35:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1226681 00:07:36.238 10:35:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:36.238 10:35:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.238 10:35:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226681 00:07:36.496 10:35:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.496 10:35:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.496 10:35:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226681' 00:07:36.496 killing process with pid 1226681 00:07:36.496 10:35:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1226681 00:07:36.496 10:35:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1226681 00:07:36.753 10:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1226687 ]] 00:07:36.753 10:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1226687 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1226687 ']' 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1226687 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226687 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1226687' 00:07:36.753 killing process with pid 1226687 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1226687 00:07:36.753 10:35:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1226687 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1226681 ]] 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1226681 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1226681 ']' 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1226681 00:07:37.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1226681) - No such process 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1226681 is not found' 00:07:37.319 Process with pid 1226681 is not found 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1226687 ]] 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1226687 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1226687 ']' 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1226687 00:07:37.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1226687) - No such process 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1226687 is not found' 00:07:37.319 Process with pid 1226687 is not found 00:07:37.319 10:35:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.319 00:07:37.319 real 0m15.998s 00:07:37.319 user 0m28.981s 00:07:37.319 sys 0m5.368s 00:07:37.319 10:35:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.319 
10:35:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.319 ************************************ 00:07:37.319 END TEST cpu_locks 00:07:37.319 ************************************ 00:07:37.319 00:07:37.319 real 0m40.629s 00:07:37.319 user 1m19.508s 00:07:37.319 sys 0m9.356s 00:07:37.319 10:35:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.319 10:35:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.319 ************************************ 00:07:37.319 END TEST event 00:07:37.319 ************************************ 00:07:37.319 10:35:24 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:37.319 10:35:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.320 10:35:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.320 10:35:24 -- common/autotest_common.sh@10 -- # set +x 00:07:37.320 ************************************ 00:07:37.320 START TEST thread 00:07:37.320 ************************************ 00:07:37.320 10:35:24 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:37.320 * Looking for test storage... 
00:07:37.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:37.320 10:35:24 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.320 10:35:24 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.320 10:35:24 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.579 10:35:24 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.579 10:35:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.579 10:35:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.579 10:35:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.579 10:35:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.579 10:35:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.579 10:35:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.579 10:35:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.579 10:35:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.579 10:35:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.579 10:35:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.579 10:35:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.579 10:35:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:37.579 10:35:24 thread -- scripts/common.sh@345 -- # : 1 00:07:37.579 10:35:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.579 10:35:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.579 10:35:24 thread -- scripts/common.sh@365 -- # decimal 1 00:07:37.579 10:35:24 thread -- scripts/common.sh@353 -- # local d=1 00:07:37.579 10:35:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.579 10:35:24 thread -- scripts/common.sh@355 -- # echo 1 00:07:37.579 10:35:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.579 10:35:24 thread -- scripts/common.sh@366 -- # decimal 2 00:07:37.579 10:35:25 thread -- scripts/common.sh@353 -- # local d=2 00:07:37.579 10:35:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.579 10:35:25 thread -- scripts/common.sh@355 -- # echo 2 00:07:37.579 10:35:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.579 10:35:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.579 10:35:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.579 10:35:25 thread -- scripts/common.sh@368 -- # return 0 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.579 --rc genhtml_branch_coverage=1 00:07:37.579 --rc genhtml_function_coverage=1 00:07:37.579 --rc genhtml_legend=1 00:07:37.579 --rc geninfo_all_blocks=1 00:07:37.579 --rc geninfo_unexecuted_blocks=1 00:07:37.579 00:07:37.579 ' 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.579 --rc genhtml_branch_coverage=1 00:07:37.579 --rc genhtml_function_coverage=1 00:07:37.579 --rc genhtml_legend=1 00:07:37.579 --rc geninfo_all_blocks=1 00:07:37.579 --rc geninfo_unexecuted_blocks=1 00:07:37.579 00:07:37.579 ' 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.579 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.579 --rc genhtml_branch_coverage=1 00:07:37.579 --rc genhtml_function_coverage=1 00:07:37.579 --rc genhtml_legend=1 00:07:37.579 --rc geninfo_all_blocks=1 00:07:37.579 --rc geninfo_unexecuted_blocks=1 00:07:37.579 00:07:37.579 ' 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.579 --rc genhtml_branch_coverage=1 00:07:37.579 --rc genhtml_function_coverage=1 00:07:37.579 --rc genhtml_legend=1 00:07:37.579 --rc geninfo_all_blocks=1 00:07:37.579 --rc geninfo_unexecuted_blocks=1 00:07:37.579 00:07:37.579 ' 00:07:37.579 10:35:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.579 10:35:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.579 ************************************ 00:07:37.579 START TEST thread_poller_perf 00:07:37.579 ************************************ 00:07:37.579 10:35:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.579 [2024-11-19 10:35:25.046058] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:37.579 [2024-11-19 10:35:25.046124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227184 ] 00:07:37.579 [2024-11-19 10:35:25.113414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.579 [2024-11-19 10:35:25.170109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.579 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:38.953 [2024-11-19T09:35:26.576Z] ====================================== 00:07:38.953 [2024-11-19T09:35:26.576Z] busy:2712654984 (cyc) 00:07:38.953 [2024-11-19T09:35:26.576Z] total_run_count: 366000 00:07:38.953 [2024-11-19T09:35:26.576Z] tsc_hz: 2700000000 (cyc) 00:07:38.953 [2024-11-19T09:35:26.576Z] ====================================== 00:07:38.953 [2024-11-19T09:35:26.576Z] poller_cost: 7411 (cyc), 2744 (nsec) 00:07:38.953 00:07:38.953 real 0m1.207s 00:07:38.953 user 0m1.133s 00:07:38.953 sys 0m0.069s 00:07:38.953 10:35:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.953 10:35:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:38.953 ************************************ 00:07:38.953 END TEST thread_poller_perf 00:07:38.953 ************************************ 00:07:38.953 10:35:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.953 10:35:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:38.953 10:35:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.953 10:35:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.953 ************************************ 00:07:38.953 START TEST thread_poller_perf 00:07:38.953 
************************************ 00:07:38.953 10:35:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.953 [2024-11-19 10:35:26.302896] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:38.953 [2024-11-19 10:35:26.302961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227342 ] 00:07:38.953 [2024-11-19 10:35:26.369120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.953 [2024-11-19 10:35:26.426767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.953 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:39.887 [2024-11-19T09:35:27.510Z] ====================================== 00:07:39.887 [2024-11-19T09:35:27.510Z] busy:2702150841 (cyc) 00:07:39.887 [2024-11-19T09:35:27.510Z] total_run_count: 4858000 00:07:39.888 [2024-11-19T09:35:27.511Z] tsc_hz: 2700000000 (cyc) 00:07:39.888 [2024-11-19T09:35:27.511Z] ====================================== 00:07:39.888 [2024-11-19T09:35:27.511Z] poller_cost: 556 (cyc), 205 (nsec) 00:07:39.888 00:07:39.888 real 0m1.202s 00:07:39.888 user 0m1.139s 00:07:39.888 sys 0m0.058s 00:07:39.888 10:35:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.888 10:35:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.888 ************************************ 00:07:39.888 END TEST thread_poller_perf 00:07:39.888 ************************************ 00:07:40.146 10:35:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:40.146 00:07:40.146 real 0m2.664s 00:07:40.146 user 0m2.398s 00:07:40.146 sys 0m0.272s 00:07:40.146 10:35:27 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.146 10:35:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.146 ************************************ 00:07:40.146 END TEST thread 00:07:40.146 ************************************ 00:07:40.146 10:35:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:40.146 10:35:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:40.146 10:35:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.146 10:35:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.146 10:35:27 -- common/autotest_common.sh@10 -- # set +x 00:07:40.146 ************************************ 00:07:40.146 START TEST app_cmdline 00:07:40.146 ************************************ 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:40.146 * Looking for test storage... 00:07:40.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.146 10:35:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.146 --rc genhtml_branch_coverage=1 
00:07:40.146 --rc genhtml_function_coverage=1 00:07:40.146 --rc genhtml_legend=1 00:07:40.146 --rc geninfo_all_blocks=1 00:07:40.146 --rc geninfo_unexecuted_blocks=1 00:07:40.146 00:07:40.146 ' 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.146 --rc genhtml_branch_coverage=1 00:07:40.146 --rc genhtml_function_coverage=1 00:07:40.146 --rc genhtml_legend=1 00:07:40.146 --rc geninfo_all_blocks=1 00:07:40.146 --rc geninfo_unexecuted_blocks=1 00:07:40.146 00:07:40.146 ' 00:07:40.146 10:35:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.147 --rc genhtml_branch_coverage=1 00:07:40.147 --rc genhtml_function_coverage=1 00:07:40.147 --rc genhtml_legend=1 00:07:40.147 --rc geninfo_all_blocks=1 00:07:40.147 --rc geninfo_unexecuted_blocks=1 00:07:40.147 00:07:40.147 ' 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.147 --rc genhtml_branch_coverage=1 00:07:40.147 --rc genhtml_function_coverage=1 00:07:40.147 --rc genhtml_legend=1 00:07:40.147 --rc geninfo_all_blocks=1 00:07:40.147 --rc geninfo_unexecuted_blocks=1 00:07:40.147 00:07:40.147 ' 00:07:40.147 10:35:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:40.147 10:35:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1227548 00:07:40.147 10:35:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:40.147 10:35:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1227548 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1227548 ']' 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.147 10:35:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:40.147 [2024-11-19 10:35:27.765632] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:40.147 [2024-11-19 10:35:27.765723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227548 ] 00:07:40.405 [2024-11-19 10:35:27.830156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.405 [2024-11-19 10:35:27.887187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.663 10:35:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.663 10:35:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:40.663 10:35:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:40.921 { 00:07:40.921 "version": "SPDK v25.01-pre git sha1 53ca6a885", 00:07:40.921 "fields": { 00:07:40.921 "major": 25, 00:07:40.921 "minor": 1, 00:07:40.921 "patch": 0, 00:07:40.921 "suffix": "-pre", 00:07:40.921 "commit": "53ca6a885" 00:07:40.921 } 00:07:40.921 } 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:40.921 10:35:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.921 10:35:28 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.179 request: 00:07:41.179 { 00:07:41.179 "method": "env_dpdk_get_mem_stats", 00:07:41.179 "req_id": 1 00:07:41.179 } 00:07:41.179 Got JSON-RPC error response 00:07:41.179 response: 00:07:41.179 { 00:07:41.179 "code": -32601, 00:07:41.179 "message": "Method not found" 00:07:41.179 } 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.179 10:35:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1227548 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1227548 ']' 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1227548 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227548 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227548' 00:07:41.179 killing process with pid 1227548 00:07:41.179 
10:35:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 1227548 00:07:41.179 10:35:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 1227548 00:07:41.751 00:07:41.751 real 0m1.630s 00:07:41.751 user 0m2.025s 00:07:41.751 sys 0m0.483s 00:07:41.751 10:35:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.751 10:35:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.751 ************************************ 00:07:41.751 END TEST app_cmdline 00:07:41.751 ************************************ 00:07:41.751 10:35:29 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.751 10:35:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.751 10:35:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.751 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:41.751 ************************************ 00:07:41.751 START TEST version 00:07:41.751 ************************************ 00:07:41.751 10:35:29 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.751 * Looking for test storage... 
00:07:41.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.751 10:35:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.751 10:35:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.751 10:35:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.010 10:35:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.010 10:35:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.010 10:35:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.010 10:35:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.010 10:35:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.010 10:35:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.010 10:35:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.010 10:35:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.010 10:35:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.010 10:35:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.010 10:35:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.010 10:35:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.010 10:35:29 version -- scripts/common.sh@344 -- # case "$op" in 00:07:42.010 10:35:29 version -- scripts/common.sh@345 -- # : 1 00:07:42.010 10:35:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.010 10:35:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.010 10:35:29 version -- scripts/common.sh@365 -- # decimal 1 00:07:42.011 10:35:29 version -- scripts/common.sh@353 -- # local d=1 00:07:42.011 10:35:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.011 10:35:29 version -- scripts/common.sh@355 -- # echo 1 00:07:42.011 10:35:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.011 10:35:29 version -- scripts/common.sh@366 -- # decimal 2 00:07:42.011 10:35:29 version -- scripts/common.sh@353 -- # local d=2 00:07:42.011 10:35:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.011 10:35:29 version -- scripts/common.sh@355 -- # echo 2 00:07:42.011 10:35:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.011 10:35:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.011 10:35:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.011 10:35:29 version -- scripts/common.sh@368 -- # return 0 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.011 --rc genhtml_branch_coverage=1 00:07:42.011 --rc genhtml_function_coverage=1 00:07:42.011 --rc genhtml_legend=1 00:07:42.011 --rc geninfo_all_blocks=1 00:07:42.011 --rc geninfo_unexecuted_blocks=1 00:07:42.011 00:07:42.011 ' 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.011 --rc genhtml_branch_coverage=1 00:07:42.011 --rc genhtml_function_coverage=1 00:07:42.011 --rc genhtml_legend=1 00:07:42.011 --rc geninfo_all_blocks=1 00:07:42.011 --rc geninfo_unexecuted_blocks=1 00:07:42.011 00:07:42.011 ' 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.011 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.011 --rc genhtml_branch_coverage=1 00:07:42.011 --rc genhtml_function_coverage=1 00:07:42.011 --rc genhtml_legend=1 00:07:42.011 --rc geninfo_all_blocks=1 00:07:42.011 --rc geninfo_unexecuted_blocks=1 00:07:42.011 00:07:42.011 ' 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.011 --rc genhtml_branch_coverage=1 00:07:42.011 --rc genhtml_function_coverage=1 00:07:42.011 --rc genhtml_legend=1 00:07:42.011 --rc geninfo_all_blocks=1 00:07:42.011 --rc geninfo_unexecuted_blocks=1 00:07:42.011 00:07:42.011 ' 00:07:42.011 10:35:29 version -- app/version.sh@17 -- # get_header_version major 00:07:42.011 10:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # cut -f2 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:42.011 10:35:29 version -- app/version.sh@17 -- # major=25 00:07:42.011 10:35:29 version -- app/version.sh@18 -- # get_header_version minor 00:07:42.011 10:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # cut -f2 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:42.011 10:35:29 version -- app/version.sh@18 -- # minor=1 00:07:42.011 10:35:29 version -- app/version.sh@19 -- # get_header_version patch 00:07:42.011 10:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # cut -f2 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:42.011 
10:35:29 version -- app/version.sh@19 -- # patch=0 00:07:42.011 10:35:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:42.011 10:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # cut -f2 00:07:42.011 10:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:42.011 10:35:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:42.011 10:35:29 version -- app/version.sh@22 -- # version=25.1 00:07:42.011 10:35:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:42.011 10:35:29 version -- app/version.sh@28 -- # version=25.1rc0 00:07:42.011 10:35:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:42.011 10:35:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:42.011 10:35:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:42.011 10:35:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:42.011 00:07:42.011 real 0m0.200s 00:07:42.011 user 0m0.134s 00:07:42.011 sys 0m0.091s 00:07:42.011 10:35:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.011 10:35:29 version -- common/autotest_common.sh@10 -- # set +x 00:07:42.011 ************************************ 00:07:42.011 END TEST version 00:07:42.011 ************************************ 00:07:42.011 10:35:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:42.011 10:35:29 -- spdk/autotest.sh@194 -- # uname -s 00:07:42.011 10:35:29 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:42.011 10:35:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:42.011 10:35:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:42.011 10:35:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:42.011 10:35:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.011 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:42.011 10:35:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:42.011 10:35:29 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:42.011 10:35:29 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:42.011 10:35:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.011 10:35:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.011 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:42.011 ************************************ 00:07:42.011 START TEST nvmf_tcp 00:07:42.011 ************************************ 00:07:42.011 10:35:29 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:42.011 * Looking for test storage... 
00:07:42.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:42.011 10:35:29 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.011 10:35:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.011 10:35:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.272 10:35:29 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:42.272 10:35:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:42.272 10:35:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.272 10:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 ************************************ 00:07:42.272 START TEST nvmf_target_core 00:07:42.272 ************************************ 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:42.272 * Looking for test storage... 
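Each test section above re-runs the same `lcov --version` gate: `scripts/common.sh`'s `lt 1.15 2` splits both dotted versions on `.` and `-` into arrays and compares them component by component. The function below is a simplified reconstruction of that `cmp_versions` logic, not the exact SPDK source; missing components compare as 0.

```shell
#!/usr/bin/env bash
# Reconstruction of the lt/cmp_versions pattern traced in the log:
# split on '.' and '-', then compare numerically per component.
lt() {
    local -a ver1 ver2
    local v max a b
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # A missing component counts as 0 (e.g. "2" vs "1.15").
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.0 1.15 || echo "2.0 is not < 1.15"
```

In the log, `lt 1.15 2` succeeding means the installed lcov is new enough, so the branch/function coverage `--rc` options get exported.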
00:07:42.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 
00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.272 --rc genhtml_branch_coverage=1 00:07:42.272 --rc genhtml_function_coverage=1 00:07:42.272 --rc genhtml_legend=1 00:07:42.272 --rc geninfo_all_blocks=1 00:07:42.272 --rc geninfo_unexecuted_blocks=1 00:07:42.272 00:07:42.272 ' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.272 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.273 ************************************ 00:07:42.273 START TEST nvmf_abort 00:07:42.273 ************************************ 00:07:42.273 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.532 * Looking for test storage... 
00:07:42.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.532 
10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.532 10:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.532 --rc genhtml_branch_coverage=1 00:07:42.532 --rc genhtml_function_coverage=1 00:07:42.532 --rc genhtml_legend=1 00:07:42.532 --rc geninfo_all_blocks=1 00:07:42.532 --rc 
geninfo_unexecuted_blocks=1 00:07:42.532 00:07:42.532 ' 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.532 --rc genhtml_branch_coverage=1 00:07:42.532 --rc genhtml_function_coverage=1 00:07:42.532 --rc genhtml_legend=1 00:07:42.532 --rc geninfo_all_blocks=1 00:07:42.532 --rc geninfo_unexecuted_blocks=1 00:07:42.532 00:07:42.532 ' 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.532 --rc genhtml_branch_coverage=1 00:07:42.532 --rc genhtml_function_coverage=1 00:07:42.532 --rc genhtml_legend=1 00:07:42.532 --rc geninfo_all_blocks=1 00:07:42.532 --rc geninfo_unexecuted_blocks=1 00:07:42.532 00:07:42.532 ' 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.532 --rc genhtml_branch_coverage=1 00:07:42.532 --rc genhtml_function_coverage=1 00:07:42.532 --rc genhtml_legend=1 00:07:42.532 --rc geninfo_all_blocks=1 00:07:42.532 --rc geninfo_unexecuted_blocks=1 00:07:42.532 00:07:42.532 ' 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.532 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.533 10:35:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.533 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.065 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.066 10:35:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:45.066 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:45.066 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.066 10:35:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:45.066 Found net devices under 0000:09:00.0: cvl_0_0 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:07:45.066 Found net devices under 0000:09:00.1: cvl_0_1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.066 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:45.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:45.066 00:07:45.066 --- 10.0.0.2 ping statistics --- 00:07:45.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.067 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:07:45.067 00:07:45.067 --- 10.0.0.1 ping statistics --- 00:07:45.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.067 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1229636 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1229636 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1229636 ']' 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.067 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.067 [2024-11-19 10:35:32.457926] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:45.067 [2024-11-19 10:35:32.458003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.067 [2024-11-19 10:35:32.526382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.067 [2024-11-19 10:35:32.583143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.067 [2024-11-19 10:35:32.583193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.067 [2024-11-19 10:35:32.583221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.067 [2024-11-19 10:35:32.583232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.067 [2024-11-19 10:35:32.583241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.067 [2024-11-19 10:35:32.584812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.067 [2024-11-19 10:35:32.584875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.067 [2024-11-19 10:35:32.584879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 [2024-11-19 10:35:32.732936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 Malloc0 00:07:45.325 10:35:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 Delay0 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.325 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.326 [2024-11-19 10:35:32.809754] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.326 10:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:45.326 [2024-11-19 10:35:32.925188] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:47.855 Initializing NVMe Controllers 00:07:47.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:47.855 controller IO queue size 128 less than required 00:07:47.855 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:47.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:47.855 Initialization complete. Launching workers. 
00:07:47.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28727 00:07:47.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28792, failed to submit 62 00:07:47.855 success 28731, unsuccessful 61, failed 0 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.855 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.855 rmmod nvme_tcp 00:07:47.855 rmmod nvme_fabrics 00:07:47.855 rmmod nvme_keyring 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:47.856 10:35:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1229636 ']' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1229636 ']' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1229636' 00:07:47.856 killing process with pid 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1229636 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.856 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.763 00:07:49.763 real 0m7.487s 00:07:49.763 user 0m10.629s 00:07:49.763 sys 0m2.593s 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.763 ************************************ 00:07:49.763 END TEST nvmf_abort 00:07:49.763 ************************************ 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.763 10:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.039 ************************************ 00:07:50.039 START TEST nvmf_ns_hotplug_stress 00:07:50.039 ************************************ 00:07:50.039 10:35:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:50.039 * Looking for test storage... 00:07:50.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.039 
10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.039 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.040 10:35:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.040 --rc genhtml_branch_coverage=1 00:07:50.040 --rc genhtml_function_coverage=1 00:07:50.040 --rc genhtml_legend=1 00:07:50.040 --rc geninfo_all_blocks=1 00:07:50.040 --rc geninfo_unexecuted_blocks=1 00:07:50.040 00:07:50.040 ' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.040 --rc genhtml_branch_coverage=1 00:07:50.040 --rc genhtml_function_coverage=1 00:07:50.040 --rc genhtml_legend=1 00:07:50.040 --rc geninfo_all_blocks=1 00:07:50.040 --rc geninfo_unexecuted_blocks=1 00:07:50.040 00:07:50.040 ' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.040 --rc genhtml_branch_coverage=1 00:07:50.040 --rc genhtml_function_coverage=1 00:07:50.040 --rc genhtml_legend=1 00:07:50.040 --rc geninfo_all_blocks=1 00:07:50.040 --rc geninfo_unexecuted_blocks=1 00:07:50.040 00:07:50.040 ' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.040 --rc genhtml_branch_coverage=1 00:07:50.040 --rc genhtml_function_coverage=1 00:07:50.040 --rc genhtml_legend=1 00:07:50.040 --rc geninfo_all_blocks=1 00:07:50.040 --rc geninfo_unexecuted_blocks=1 00:07:50.040 
00:07:50.040 ' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.040 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.574 10:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:52.574 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:52.574 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.574 10:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:52.574 Found net devices under 0000:09:00.0: cvl_0_0 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.574 10:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:52.574 Found net devices under 0000:09:00.1: cvl_0_1 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.574 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.575 10:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:07:52.575 00:07:52.575 --- 10.0.0.2 ping statistics --- 00:07:52.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.575 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:07:52.575 00:07:52.575 --- 10.0.0.1 ping statistics --- 00:07:52.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.575 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1231993 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1231993 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1231993 ']' 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.575 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.575 [2024-11-19 10:35:39.975903] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:52.575 [2024-11-19 10:35:39.975984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.575 [2024-11-19 10:35:40.054738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.575 [2024-11-19 10:35:40.114933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.575 [2024-11-19 10:35:40.114983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.575 [2024-11-19 10:35:40.115011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.575 [2024-11-19 10:35:40.115022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.575 [2024-11-19 10:35:40.115033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.575 [2024-11-19 10:35:40.116625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.575 [2024-11-19 10:35:40.116688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.575 [2024-11-19 10:35:40.116691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:52.832 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.090 [2024-11-19 10:35:40.506247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.090 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.347 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.605 [2024-11-19 10:35:41.061248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.605 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.863 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:54.121 Malloc0 00:07:54.121 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:54.379 Delay0 00:07:54.379 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.636 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:54.894 NULL1 00:07:54.894 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:55.152 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1232299 00:07:55.152 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:55.152 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:07:55.152 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.409 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.667 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:55.667 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:55.951 true 00:07:55.951 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:07:55.951 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.241 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.500 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:56.500 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:56.758 true 00:07:56.758 10:35:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:07:56.758 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.015 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.273 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:57.273 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:57.531 true 00:07:57.788 10:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:07:57.789 10:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.720 Read completed with error (sct=0, sc=11) 00:07:58.720 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.978 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:58.978 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:59.236 true 00:07:59.236 
10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:07:59.236 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.493 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.751 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:59.751 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:00.009 true 00:08:00.009 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:00.009 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.942 10:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.942 10:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:00.942 10:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1006 00:08:01.200 true 00:08:01.200 10:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:01.200 10:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.457 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.715 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:01.715 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:01.974 true 00:08:02.233 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:02.233 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.490 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.748 10:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:02.748 10:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:03.006 true 00:08:03.006 10:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1232299 00:08:03.006 10:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.937 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.195 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:04.195 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:04.505 true 00:08:04.505 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:04.505 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.763 10:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.021 10:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:05.021 10:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:05.279 true 00:08:05.279 10:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:05.279 10:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.536 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.794 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:05.794 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:06.052 true 00:08:06.052 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:06.052 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.985 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.243 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:07.243 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:07.500 true 00:08:07.501 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:07.501 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.758 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.016 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:08.016 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:08.274 true 00:08:08.274 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:08.274 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.224 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.481 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:09.481 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:09.738 true 00:08:09.738 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1232299 00:08:09.738 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.996 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.254 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:10.254 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:10.512 true 00:08:10.512 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:10.512 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.445 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.703 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:11.703 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:11.960 true 00:08:11.960 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:11.960 10:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.219 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.477 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:12.477 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:12.734 true 00:08:12.734 10:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:12.734 10:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.992 10:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.250 10:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:13.250 10:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:13.507 true 00:08:13.507 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:13.507 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.440 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.698 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:14.698 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:14.955 true 00:08:14.955 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:14.955 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.213 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.470 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:15.471 10:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:15.728 true 00:08:15.728 10:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:15.728 10:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.986 
10:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.244 10:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:16.244 10:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:16.501 true 00:08:16.501 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:16.501 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.434 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.692 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:17.692 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:17.950 true 00:08:18.208 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:18.208 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:18.465 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.722 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:18.723 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:18.980 true 00:08:18.981 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:18.981 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.238 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.496 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:19.496 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:19.754 true 00:08:19.754 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:19.754 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.686 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.943 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:20.944 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:21.201 true 00:08:21.201 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:21.201 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.766 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.766 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:21.767 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:22.024 true 00:08:22.282 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:22.282 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.540 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:08:22.797 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:22.797 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:23.056 true 00:08:23.056 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:23.056 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.990 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.247 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:24.247 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:24.247 true 00:08:24.505 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:24.505 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.763 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.020 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:25.020 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:25.278 true 00:08:25.278 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:25.278 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.212 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.212 Initializing NVMe Controllers 00:08:26.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:26.212 Controller IO queue size 128, less than required. 00:08:26.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.212 Controller IO queue size 128, less than required. 00:08:26.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:26.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:26.212 Initialization complete. Launching workers. 
00:08:26.212 ======================================================== 00:08:26.212 Latency(us) 00:08:26.212 Device Information : IOPS MiB/s Average min max 00:08:26.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 499.43 0.24 103845.40 2700.45 1011929.14 00:08:26.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8267.30 4.04 15436.47 3336.12 544074.57 00:08:26.212 ======================================================== 00:08:26.212 Total : 8766.73 4.28 20473.06 2700.45 1011929.14 00:08:26.212 00:08:26.212 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:26.212 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:26.470 true 00:08:26.470 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1232299 00:08:26.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1232299) - No such process 00:08:26.470 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1232299 00:08:26.470 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.727 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.985 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:26.985 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 
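After the single-namespace loop ends, the sh@58-60 trace that follows creates eight null bdevs (null0 through null7), one per worker thread. A runnable sketch of that setup loop, again with `echo` substituting for `scripts/rpc.py` (the 100/4096 size and block-size arguments are taken verbatim from the log):

```shell
RPC=${RPC:-echo}   # stand-in for spdk/scripts/rpc.py; export RPC to use the real tool
nthreads=8

create_nulls() {
    local i
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size> <block_size>, with the arguments the log shows
        $RPC bdev_null_create "null$i" 100 4096
    done
}

create_nulls
```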
00:08:26.985 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:26.985 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.985 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:27.243 null0 00:08:27.243 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.243 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.243 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:27.501 null1 00:08:27.501 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.501 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.501 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:27.759 null2 00:08:27.759 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.759 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.759 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:28.016 null3 00:08:28.016 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.016 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.016 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:28.274 null4 00:08:28.274 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.274 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.274 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:28.533 null5 00:08:28.533 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.533 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.533 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:28.791 null6 00:08:29.050 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:29.050 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.050 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:29.309 null7 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:29.309 10:36:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1237118 1237119 1237121 1237123 1237125 1237127 1237129 1237131 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.310 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.569 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.569 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.569 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
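The interleaved sh@59-66 trace above launches `add_remove` in the background for each (nsid, bdev) pair, collects the child pids (`1237118 1237119 ...` in this run), and waits on all of them while each worker adds and removes its namespace ten times (sh@16-18). A self-contained sketch of that fan-out, with `add_remove` reduced to its traced core and `echo` standing in for `scripts/rpc.py`:

```shell
RPC=${RPC:-echo}   # stand-in for spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8

# sh@14-18: add then remove the namespace, ten times per worker
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

# sh@59-66: one background worker per (nsid, bdev) pair, then wait for them all
run_workers() {
    local i pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"
}

run_workers
```

With the `echo` stub the eight workers emit 20 lines each (interleaved, as in the real log); the structure mirrors the traced script but is a sketch, not the shipped `ns_hotplug_stress.sh`.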
00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.828 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.087 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.604 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.604 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.605 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.605 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.605 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.605 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.605 10:36:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.605 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.170 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.429 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.687 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.945 10:36:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.945 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.204 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.463 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.721 10:36:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.721 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.980 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.238 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.496 
10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.496 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.497 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.755 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.013 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.272 10:36:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.272 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 
10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.789 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.047 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.305 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.305 rmmod nvme_tcp 00:08:35.306 rmmod nvme_fabrics 00:08:35.306 rmmod nvme_keyring 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1231993 ']' 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1231993 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 1231993 ']' 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1231993 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231993 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231993' 00:08:35.306 killing process with pid 1231993 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1231993 00:08:35.306 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1231993 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.565 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.103 00:08:38.103 real 0m47.715s 00:08:38.103 user 3m42.323s 00:08:38.103 sys 0m15.945s 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 ************************************ 00:08:38.103 END TEST nvmf_ns_hotplug_stress 00:08:38.103 ************************************ 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 ************************************ 00:08:38.103 START TEST nvmf_delete_subsystem 00:08:38.103 ************************************ 00:08:38.103 
10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.103 * Looking for test storage... 00:08:38.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.103 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.104 10:36:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.104 10:36:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.104 --rc genhtml_branch_coverage=1 00:08:38.104 --rc genhtml_function_coverage=1 00:08:38.104 --rc genhtml_legend=1 00:08:38.104 --rc geninfo_all_blocks=1 00:08:38.104 --rc geninfo_unexecuted_blocks=1 00:08:38.104 00:08:38.104 ' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.104 --rc genhtml_branch_coverage=1 00:08:38.104 --rc genhtml_function_coverage=1 00:08:38.104 --rc genhtml_legend=1 00:08:38.104 --rc geninfo_all_blocks=1 00:08:38.104 --rc geninfo_unexecuted_blocks=1 00:08:38.104 00:08:38.104 ' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.104 --rc genhtml_branch_coverage=1 00:08:38.104 --rc genhtml_function_coverage=1 00:08:38.104 --rc genhtml_legend=1 00:08:38.104 --rc geninfo_all_blocks=1 00:08:38.104 --rc geninfo_unexecuted_blocks=1 00:08:38.104 00:08:38.104 ' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.104 --rc genhtml_branch_coverage=1 00:08:38.104 --rc genhtml_function_coverage=1 00:08:38.104 --rc genhtml_legend=1 00:08:38.104 --rc geninfo_all_blocks=1 00:08:38.104 --rc geninfo_unexecuted_blocks=1 00:08:38.104 00:08:38.104 ' 
00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.104 10:36:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.104 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.105 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.015 10:36:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:40.015 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:40.016 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:40.016 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:40.016 Found net devices under 0000:09:00.0: cvl_0_0 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:08:40.016 Found net devices under 0000:09:00.1: cvl_0_1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.016 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.017 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.017 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.017 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.017 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:40.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:08:40.017 00:08:40.017 --- 10.0.0.2 ping statistics --- 00:08:40.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.017 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:40.017 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:40.276 00:08:40.276 --- 10.0.0.1 ping statistics --- 00:08:40.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.276 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:40.276 10:36:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1239914 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1239914 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1239914 ']' 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.276 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.276 [2024-11-19 10:36:27.714068] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:40.276 [2024-11-19 10:36:27.714166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.276 [2024-11-19 10:36:27.786770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.276 [2024-11-19 10:36:27.845448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.276 [2024-11-19 10:36:27.845500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.276 [2024-11-19 10:36:27.845528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.276 [2024-11-19 10:36:27.845539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.276 [2024-11-19 10:36:27.845548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.276 [2024-11-19 10:36:27.847129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.276 [2024-11-19 10:36:27.847135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.534 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 [2024-11-19 10:36:27.999916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.534 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.534 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 [2024-11-19 10:36:28.016122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 NULL1 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 Delay0 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.535 10:36:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1240050 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.535 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:40.535 [2024-11-19 10:36:28.100995] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:42.434 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.434 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.434 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error 
(sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 [2024-11-19 10:36:30.222251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3720000c40 is same with the state(6) to be set 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read 
completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 starting I/O failed: -6 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 [2024-11-19 10:36:30.223555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924a0 is same with the state(6) to be set 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed 
with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 
00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Write completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.693 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read 
completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Write completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 [2024-11-19 10:36:30.224082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f372000d4d0 is same with the state(6) to be set 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:42.694 Read completed with error (sct=0, sc=8) 00:08:43.628 [2024-11-19 10:36:31.196721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9939a0 is same with the state(6) to be set 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write 
completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 [2024-11-19 10:36:31.226035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f372000d020 is same with the state(6) to be set 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 [2024-11-19 10:36:31.226370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f372000d800 is same with the state(6) to be set 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 
Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 [2024-11-19 10:36:31.227755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9922c0 is same with the state(6) to be set 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Read completed with error (sct=0, sc=8) 00:08:43.628 Write completed with error (sct=0, sc=8) 00:08:43.628 [2024-11-19 10:36:31.228463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992680 is same with the state(6) to be set 00:08:43.628 Initializing NVMe Controllers 00:08:43.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.628 Controller IO queue size 128, less than required. 00:08:43.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:43.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.628 Initialization complete. Launching workers. 00:08:43.628 ======================================================== 00:08:43.628 Latency(us) 00:08:43.628 Device Information : IOPS MiB/s Average min max 00:08:43.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.28 0.08 924980.61 584.53 1013022.44 00:08:43.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.31 0.08 932487.93 1850.49 1011978.79 00:08:43.628 ======================================================== 00:08:43.628 Total : 312.59 0.15 928686.60 584.53 1013022.44 00:08:43.628 00:08:43.629 [2024-11-19 10:36:31.228964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9939a0 (9): Bad file descriptor 00:08:43.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:43.629 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.629 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:43.629 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1240050 00:08:43.629 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1240050 00:08:44.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1240050) - No such process 00:08:44.195 10:36:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1240050 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1240050 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1240050 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.195 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.196 [2024-11-19 10:36:31.750545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1240463 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:44.196 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.196 [2024-11-19 10:36:31.813226] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:44.762 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.762 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:44.762 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.327 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.327 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:45.327 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.891 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.891 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:45.891 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.457 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.457 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:46.457 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.715 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.715 10:36:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:46.715 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.280 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.280 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:47.280 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.538 Initializing NVMe Controllers 00:08:47.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:47.538 Controller IO queue size 128, less than required. 00:08:47.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:47.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:47.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:47.538 Initialization complete. Launching workers. 
00:08:47.538 ======================================================== 00:08:47.538 Latency(us) 00:08:47.538 Device Information : IOPS MiB/s Average min max 00:08:47.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004474.95 1000206.46 1012337.16 00:08:47.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004676.93 1000199.00 1040923.71 00:08:47.538 ======================================================== 00:08:47.538 Total : 256.00 0.12 1004575.94 1000199.00 1040923.71 00:08:47.538 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1240463 00:08:47.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1240463) - No such process 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1240463 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:47.796 rmmod nvme_tcp 00:08:47.796 rmmod nvme_fabrics 00:08:47.796 rmmod nvme_keyring 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1239914 ']' 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1239914 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1239914 ']' 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1239914 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1239914 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.796 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1239914' 00:08:47.796 killing process with pid 1239914 00:08:47.797 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1239914 00:08:47.797 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1239914 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.056 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.599 00:08:50.599 real 0m12.467s 00:08:50.599 user 0m28.004s 00:08:50.599 sys 0m2.972s 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 ************************************ 00:08:50.599 END TEST 
nvmf_delete_subsystem 00:08:50.599 ************************************ 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 ************************************ 00:08:50.599 START TEST nvmf_host_management 00:08:50.599 ************************************ 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:50.599 * Looking for test storage... 00:08:50.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.599 10:36:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.599 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.599 --rc genhtml_branch_coverage=1 00:08:50.600 --rc genhtml_function_coverage=1 00:08:50.600 --rc genhtml_legend=1 00:08:50.600 --rc 
geninfo_all_blocks=1 00:08:50.600 --rc geninfo_unexecuted_blocks=1 00:08:50.600 00:08:50.600 ' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.600 --rc genhtml_branch_coverage=1 00:08:50.600 --rc genhtml_function_coverage=1 00:08:50.600 --rc genhtml_legend=1 00:08:50.600 --rc geninfo_all_blocks=1 00:08:50.600 --rc geninfo_unexecuted_blocks=1 00:08:50.600 00:08:50.600 ' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.600 --rc genhtml_branch_coverage=1 00:08:50.600 --rc genhtml_function_coverage=1 00:08:50.600 --rc genhtml_legend=1 00:08:50.600 --rc geninfo_all_blocks=1 00:08:50.600 --rc geninfo_unexecuted_blocks=1 00:08:50.600 00:08:50.600 ' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.600 --rc genhtml_branch_coverage=1 00:08:50.600 --rc genhtml_function_coverage=1 00:08:50.600 --rc genhtml_legend=1 00:08:50.600 --rc geninfo_all_blocks=1 00:08:50.600 --rc geninfo_unexecuted_blocks=1 00:08:50.600 00:08:50.600 ' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.600 
10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.600 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:52.563 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:52.563 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.563 10:36:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:52.563 Found net devices under 0000:09:00.0: cvl_0_0 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:52.563 Found net devices under 0000:09:00.1: cvl_0_1 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.563 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.564 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.564 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.564 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.564 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:08:52.564 00:08:52.564 --- 10.0.0.2 ping statistics --- 00:08:52.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.564 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:52.564 00:08:52.564 --- 10.0.0.1 ping statistics --- 00:08:52.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.564 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.564 10:36:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1242894 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1242894 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1242894 ']' 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.564 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.564 [2024-11-19 10:36:40.154268] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:52.564 [2024-11-19 10:36:40.154395] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.822 [2024-11-19 10:36:40.231856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.822 [2024-11-19 10:36:40.293276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.822 [2024-11-19 10:36:40.293348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.822 [2024-11-19 10:36:40.293378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.822 [2024-11-19 10:36:40.293390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.822 [2024-11-19 10:36:40.293400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
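The nvmf_tcp_init steps traced earlier (nvmf/common.sh@250–291) move one NIC into a private network namespace, address both ends, open the NVMe/TCP port, and ping across. A dry-run sketch of that sequence, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from the trace; `run` and `nvmf_tcp_init_sketch` are hypothetical names, and `run` echoes instead of executing since the real commands need root and real hardware:

```shell
#!/usr/bin/env bash
# Echo each command instead of running it (the real steps require root
# and the physical cvl_0_* net devices seen in the trace above).
run() { echo "+ $*"; }

nvmf_tcp_init_sketch() {
  local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
  run ip netns add "$ns"                                  # nvmf/common.sh@271
  run ip link set "$target_if" netns "$ns"                # @274
  run ip addr add 10.0.0.1/24 dev "$initiator_if"         # @277
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # @278
  run ip link set "$initiator_if" up                      # @281
  run ip netns exec "$ns" ip link set "$target_if" up     # @283
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT  # @287/@790
  run ping -c 1 10.0.0.2                                  # @290: host -> namespaced target
  run ip netns exec "$ns" ping -c 1 10.0.0.1              # @291: target -> host
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

No veth pair is needed here because both interfaces are real ports of the same NIC wired back-to-back, so traffic crosses the physical link between the namespace and the host.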
00:08:52.822 [2024-11-19 10:36:40.294914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.822 [2024-11-19 10:36:40.294980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.822 [2024-11-19 10:36:40.295044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.822 [2024-11-19 10:36:40.295047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.822 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.822 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:52.822 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.822 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.822 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 [2024-11-19 10:36:40.448886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:53.081 10:36:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 Malloc0 00:08:53.081 [2024-11-19 10:36:40.530387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1242993 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1242993 /var/tmp/bdevperf.sock 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1242993 ']' 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.081 { 00:08:53.081 "params": { 00:08:53.081 "name": "Nvme$subsystem", 00:08:53.081 "trtype": "$TEST_TRANSPORT", 00:08:53.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.081 "adrfam": "ipv4", 00:08:53.081 "trsvcid": "$NVMF_PORT", 00:08:53.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.081 "hdgst": ${hdgst:-false}, 
00:08:53.081 "ddgst": ${ddgst:-false} 00:08:53.081 }, 00:08:53.081 "method": "bdev_nvme_attach_controller" 00:08:53.081 } 00:08:53.081 EOF 00:08:53.081 )") 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:53.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.081 "params": { 00:08:53.081 "name": "Nvme0", 00:08:53.081 "trtype": "tcp", 00:08:53.081 "traddr": "10.0.0.2", 00:08:53.081 "adrfam": "ipv4", 00:08:53.081 "trsvcid": "4420", 00:08:53.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:53.081 "hdgst": false, 00:08:53.081 "ddgst": false 00:08:53.081 }, 00:08:53.081 "method": "bdev_nvme_attach_controller" 00:08:53.081 }' 00:08:53.081 [2024-11-19 10:36:40.613185] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:53.081 [2024-11-19 10:36:40.613261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242993 ] 00:08:53.081 [2024-11-19 10:36:40.682332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.339 [2024-11-19 10:36:40.743442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.597 Running I/O for 10 seconds... 
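The gen_nvmf_target_json call traced above builds bdevperf's `--json` config by expanding one heredoc fragment per subsystem id and joining them with `IFS=,` (the real helper additionally pipes the result through `jq .`). A minimal standalone sketch of that pattern, with the target address and port filled in from the trace output rather than the `$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` variables the script uses:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one JSON fragment per
# subsystem id, comma-joined into a bdev config array.
gen_nvmf_target_json() {
  local subsystem
  local config=()
  for subsystem in "${@:-0}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

Feeding this to bdevperf via `--json /dev/fd/63` (process substitution, as in the trace) lets the test attach the NVMe-oF controller without writing a config file to disk.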
00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:53.597 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=540 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 540 -ge 100 ']' 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.858 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 [2024-11-19 10:36:41.357183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0f10 is same with the state(6) to be set 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.859 [2024-11-19 10:36:41.365337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:08:53.859 [2024-11-19 10:36:41.365380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.365399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.859 [2024-11-19 10:36:41.365413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.365427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.859 [2024-11-19 10:36:41.365440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.365454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.859 [2024-11-19 10:36:41.365467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.365479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1a40 is same with the state(6) to be set 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:53.859 [2024-11-19 10:36:41.375442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c1a40 (9): Bad file descriptor 00:08:53.859 [2024-11-19 10:36:41.375539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.375981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.375994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.376009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.376023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.376038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.376051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.376066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 
10:36:41.376081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.376096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.859 [2024-11-19 10:36:41.376109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.859 [2024-11-19 10:36:41.376124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 
[2024-11-19 10:36:41.376805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.376975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.376989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.377018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.377046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.377074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.377107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.860 [2024-11-19 10:36:41.377135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.860 [2024-11-19 10:36:41.377150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 
10:36:41.377497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.377514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.861 [2024-11-19 10:36:41.377528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.861 [2024-11-19 10:36:41.378748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:53.861 task offset: 81664 on job bdev=Nvme0n1 fails 00:08:53.861 00:08:53.861 Latency(us) 00:08:53.861 [2024-11-19T09:36:41.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.861 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:53.861 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:53.861 Verification LBA range: start 0x0 length 0x400 00:08:53.861 Nvme0n1 : 0.41 1545.94 96.62 155.08 0.00 36567.12 2500.08 34952.53 00:08:53.861 [2024-11-19T09:36:41.484Z] =================================================================================================================== 00:08:53.861 [2024-11-19T09:36:41.484Z] Total : 1545.94 96.62 155.08 0.00 36567.12 2500.08 34952.53 00:08:53.861 [2024-11-19 10:36:41.380655] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.861 [2024-11-19 10:36:41.385527] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1242993 00:08:54.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1242993) - No such process 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.817 { 00:08:54.817 "params": { 00:08:54.817 "name": "Nvme$subsystem", 00:08:54.817 "trtype": "$TEST_TRANSPORT", 00:08:54.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.817 "adrfam": "ipv4", 00:08:54.817 "trsvcid": "$NVMF_PORT", 00:08:54.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.817 "hdgst": ${hdgst:-false}, 00:08:54.817 "ddgst": ${ddgst:-false} 00:08:54.817 }, 00:08:54.817 "method": "bdev_nvme_attach_controller" 00:08:54.817 } 00:08:54.817 EOF 00:08:54.817 )") 00:08:54.817 
10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:54.817 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.817 "params": { 00:08:54.817 "name": "Nvme0", 00:08:54.817 "trtype": "tcp", 00:08:54.817 "traddr": "10.0.0.2", 00:08:54.817 "adrfam": "ipv4", 00:08:54.817 "trsvcid": "4420", 00:08:54.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.817 "hdgst": false, 00:08:54.817 "ddgst": false 00:08:54.817 }, 00:08:54.817 "method": "bdev_nvme_attach_controller" 00:08:54.817 }' 00:08:54.817 [2024-11-19 10:36:42.424666] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:54.817 [2024-11-19 10:36:42.424750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243220 ] 00:08:55.075 [2024-11-19 10:36:42.494918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.075 [2024-11-19 10:36:42.554040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.333 Running I/O for 1 seconds... 
00:08:56.266 1600.00 IOPS, 100.00 MiB/s 00:08:56.266 Latency(us) 00:08:56.266 [2024-11-19T09:36:43.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.266 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:56.266 Verification LBA range: start 0x0 length 0x400 00:08:56.266 Nvme0n1 : 1.02 1624.39 101.52 0.00 0.00 38770.80 6213.78 35146.71 00:08:56.266 [2024-11-19T09:36:43.889Z] =================================================================================================================== 00:08:56.266 [2024-11-19T09:36:43.889Z] Total : 1624.39 101.52 0.00 0.00 38770.80 6213.78 35146.71 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.524 10:36:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.524 rmmod nvme_tcp 00:08:56.524 rmmod nvme_fabrics 00:08:56.524 rmmod nvme_keyring 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1242894 ']' 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1242894 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1242894 ']' 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1242894 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.524 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242894 00:08:56.782 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:56.782 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.782 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242894' 00:08:56.782 killing process with pid 1242894 00:08:56.782 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1242894 00:08:56.782 10:36:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1242894 00:08:57.041 [2024-11-19 10:36:44.406250] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:57.041 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.042 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:58.949 00:08:58.949 real 0m8.797s 00:08:58.949 user 0m19.650s 
00:08:58.949 sys 0m2.737s 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 ************************************ 00:08:58.949 END TEST nvmf_host_management 00:08:58.949 ************************************ 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 ************************************ 00:08:58.949 START TEST nvmf_lvol 00:08:58.949 ************************************ 00:08:58.949 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:59.209 * Looking for test storage... 
00:08:59.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.209 10:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.209 --rc genhtml_branch_coverage=1 00:08:59.209 --rc genhtml_function_coverage=1 00:08:59.209 --rc genhtml_legend=1 00:08:59.209 --rc geninfo_all_blocks=1 00:08:59.209 --rc geninfo_unexecuted_blocks=1 
00:08:59.209 00:08:59.209 ' 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.209 --rc genhtml_branch_coverage=1 00:08:59.209 --rc genhtml_function_coverage=1 00:08:59.209 --rc genhtml_legend=1 00:08:59.209 --rc geninfo_all_blocks=1 00:08:59.209 --rc geninfo_unexecuted_blocks=1 00:08:59.209 00:08:59.209 ' 00:08:59.209 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.209 --rc genhtml_branch_coverage=1 00:08:59.209 --rc genhtml_function_coverage=1 00:08:59.209 --rc genhtml_legend=1 00:08:59.209 --rc geninfo_all_blocks=1 00:08:59.210 --rc geninfo_unexecuted_blocks=1 00:08:59.210 00:08:59.210 ' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.210 --rc genhtml_branch_coverage=1 00:08:59.210 --rc genhtml_function_coverage=1 00:08:59.210 --rc genhtml_legend=1 00:08:59.210 --rc geninfo_all_blocks=1 00:08:59.210 --rc geninfo_unexecuted_blocks=1 00:08:59.210 00:08:59.210 ' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.210 10:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.210 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:01.746 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:01.746 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.746 
10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:01.746 Found net devices under 0000:09:00.0: cvl_0_0 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.746 10:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:01.746 Found net devices under 0000:09:00.1: cvl_0_1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.746 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.746 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.746 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.746 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:09:01.747 00:09:01.747 --- 10.0.0.2 ping statistics --- 00:09:01.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.747 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:09:01.747 00:09:01.747 --- 10.0.0.1 ping statistics --- 00:09:01.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.747 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1245366 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1245366 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1245366 ']' 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.747 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.747 [2024-11-19 10:36:49.126570] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:01.747 [2024-11-19 10:36:49.126686] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.747 [2024-11-19 10:36:49.198820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.747 [2024-11-19 10:36:49.255690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.747 [2024-11-19 10:36:49.255740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.747 [2024-11-19 10:36:49.255769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.747 [2024-11-19 10:36:49.255781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.747 [2024-11-19 10:36:49.255791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.747 [2024-11-19 10:36:49.257243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.747 [2024-11-19 10:36:49.257324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.747 [2024-11-19 10:36:49.257343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.005 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.263 [2024-11-19 10:36:49.645957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.263 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.523 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:02.523 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.781 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:02.781 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:03.039 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:03.297 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3b8ff392-fb50-4eff-a2b4-4c88813eb3b8 00:09:03.297 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b8ff392-fb50-4eff-a2b4-4c88813eb3b8 lvol 20 00:09:03.554 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0f5a58a1-c197-4231-aa1e-04ec5e7d6509 00:09:03.554 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.812 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0f5a58a1-c197-4231-aa1e-04ec5e7d6509 00:09:04.070 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:04.328 [2024-11-19 10:36:51.909408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.328 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.586 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1245796 00:09:04.586 10:36:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:04.586 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:05.960 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0f5a58a1-c197-4231-aa1e-04ec5e7d6509 MY_SNAPSHOT 00:09:05.960 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ff4159da-e2de-481f-ab59-b287c3ff7b6d 00:09:05.960 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0f5a58a1-c197-4231-aa1e-04ec5e7d6509 30 00:09:06.218 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ff4159da-e2de-481f-ab59-b287c3ff7b6d MY_CLONE 00:09:06.784 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=03da2c2b-bdd4-470e-b71b-b07ea8fdf627 00:09:06.784 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 03da2c2b-bdd4-470e-b71b-b07ea8fdf627 00:09:07.350 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1245796 00:09:15.458 Initializing NVMe Controllers 00:09:15.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:15.458 Controller IO queue size 128, less than required. 00:09:15.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:15.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:15.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:15.458 Initialization complete. Launching workers. 00:09:15.458 ======================================================== 00:09:15.458 Latency(us) 00:09:15.458 Device Information : IOPS MiB/s Average min max 00:09:15.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10512.40 41.06 12178.71 1913.58 77067.53 00:09:15.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10332.10 40.36 12398.25 2327.50 74683.12 00:09:15.458 ======================================================== 00:09:15.458 Total : 20844.50 81.42 12287.53 1913.58 77067.53 00:09:15.458 00:09:15.458 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.458 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0f5a58a1-c197-4231-aa1e-04ec5e7d6509 00:09:15.717 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b8ff392-fb50-4eff-a2b4-4c88813eb3b8 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.975 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.975 rmmod nvme_tcp 00:09:15.975 rmmod nvme_fabrics 00:09:16.234 rmmod nvme_keyring 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1245366 ']' 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1245366 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1245366 ']' 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1245366 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245366 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1245366' 00:09:16.234 killing process with pid 1245366 00:09:16.234 10:37:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1245366 00:09:16.234 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1245366 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.494 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.401 10:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.401 00:09:18.401 real 0m19.438s 00:09:18.401 user 1m6.216s 00:09:18.401 sys 0m5.501s 00:09:18.401 10:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.401 10:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 ************************************ 00:09:18.401 END TEST 
nvmf_lvol 00:09:18.401 ************************************ 00:09:18.401 10:37:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:18.401 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.401 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.401 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.659 ************************************ 00:09:18.660 START TEST nvmf_lvs_grow 00:09:18.660 ************************************ 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:18.660 * Looking for test storage... 00:09:18.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.660 10:37:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.660 --rc genhtml_branch_coverage=1 00:09:18.660 --rc genhtml_function_coverage=1 00:09:18.660 --rc genhtml_legend=1 00:09:18.660 --rc geninfo_all_blocks=1 00:09:18.660 --rc geninfo_unexecuted_blocks=1 00:09:18.660 00:09:18.660 ' 
00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.660 --rc genhtml_branch_coverage=1 00:09:18.660 --rc genhtml_function_coverage=1 00:09:18.660 --rc genhtml_legend=1 00:09:18.660 --rc geninfo_all_blocks=1 00:09:18.660 --rc geninfo_unexecuted_blocks=1 00:09:18.660 00:09:18.660 ' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.660 --rc genhtml_branch_coverage=1 00:09:18.660 --rc genhtml_function_coverage=1 00:09:18.660 --rc genhtml_legend=1 00:09:18.660 --rc geninfo_all_blocks=1 00:09:18.660 --rc geninfo_unexecuted_blocks=1 00:09:18.660 00:09:18.660 ' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.660 --rc genhtml_branch_coverage=1 00:09:18.660 --rc genhtml_function_coverage=1 00:09:18.660 --rc genhtml_legend=1 00:09:18.660 --rc geninfo_all_blocks=1 00:09:18.660 --rc geninfo_unexecuted_blocks=1 00:09:18.660 00:09:18.660 ' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.660 10:37:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.660 
10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.660 10:37:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.660 
10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.660 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.197 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:21.198 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:21.198 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.198 
10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:21.198 Found net devices under 0000:09:00.0: cvl_0_0 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:21.198 Found net devices under 0000:09:00.1: cvl_0_1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.198 10:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:09:21.198 00:09:21.198 --- 10.0.0.2 ping statistics --- 00:09:21.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.198 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:09:21.198 00:09:21.198 --- 10.0.0.1 ping statistics --- 00:09:21.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.198 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1249197 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1249197 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1249197 ']' 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.198 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.199 [2024-11-19 10:37:08.562568] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:21.199 [2024-11-19 10:37:08.562662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.199 [2024-11-19 10:37:08.632219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.199 [2024-11-19 10:37:08.685117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.199 [2024-11-19 10:37:08.685173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.199 [2024-11-19 10:37:08.685200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.199 [2024-11-19 10:37:08.685211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.199 [2024-11-19 10:37:08.685220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.199 [2024-11-19 10:37:08.685851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.199 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.456 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.456 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.456 [2024-11-19 10:37:09.062221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.714 ************************************ 00:09:21.714 START TEST lvs_grow_clean 00:09:21.714 ************************************ 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.714 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:21.972 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:21.972 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:22.281 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:22.281 10:37:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:22.281 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:22.559 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:22.559 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:22.559 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86145871-26ff-4b95-a8d3-3bd6166d5772 lvol 150 00:09:22.818 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a81f66d1-6d1b-40b1-bf28-93344586589e 00:09:22.818 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.818 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:23.076 [2024-11-19 10:37:10.523873] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:23.076 [2024-11-19 10:37:10.523954] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:23.076 true 00:09:23.076 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:23.076 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:23.334 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:23.334 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.593 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a81f66d1-6d1b-40b1-bf28-93344586589e 00:09:23.851 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:24.110 [2024-11-19 10:37:11.603090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.110 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.368 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1249633 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:24.369 10:37:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1249633 /var/tmp/bdevperf.sock 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1249633 ']' 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:24.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.369 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:24.369 [2024-11-19 10:37:11.929247] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:24.369 [2024-11-19 10:37:11.929351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249633 ] 00:09:24.627 [2024-11-19 10:37:11.996237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.627 [2024-11-19 10:37:12.055127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.627 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.627 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:24.627 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:24.885 Nvme0n1 00:09:25.142 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:25.400 [ 00:09:25.400 { 00:09:25.400 "name": "Nvme0n1", 00:09:25.400 "aliases": [ 00:09:25.400 "a81f66d1-6d1b-40b1-bf28-93344586589e" 00:09:25.400 ], 00:09:25.400 "product_name": "NVMe disk", 00:09:25.400 "block_size": 4096, 00:09:25.400 "num_blocks": 38912, 00:09:25.400 "uuid": "a81f66d1-6d1b-40b1-bf28-93344586589e", 00:09:25.400 "numa_id": 0, 00:09:25.400 "assigned_rate_limits": { 00:09:25.400 "rw_ios_per_sec": 0, 00:09:25.401 "rw_mbytes_per_sec": 0, 00:09:25.401 "r_mbytes_per_sec": 0, 00:09:25.401 "w_mbytes_per_sec": 0 00:09:25.401 }, 00:09:25.401 "claimed": false, 00:09:25.401 "zoned": false, 00:09:25.401 "supported_io_types": { 00:09:25.401 "read": true, 
00:09:25.401 "write": true, 00:09:25.401 "unmap": true, 00:09:25.401 "flush": true, 00:09:25.401 "reset": true, 00:09:25.401 "nvme_admin": true, 00:09:25.401 "nvme_io": true, 00:09:25.401 "nvme_io_md": false, 00:09:25.401 "write_zeroes": true, 00:09:25.401 "zcopy": false, 00:09:25.401 "get_zone_info": false, 00:09:25.401 "zone_management": false, 00:09:25.401 "zone_append": false, 00:09:25.401 "compare": true, 00:09:25.401 "compare_and_write": true, 00:09:25.401 "abort": true, 00:09:25.401 "seek_hole": false, 00:09:25.401 "seek_data": false, 00:09:25.401 "copy": true, 00:09:25.401 "nvme_iov_md": false 00:09:25.401 }, 00:09:25.401 "memory_domains": [ 00:09:25.401 { 00:09:25.401 "dma_device_id": "system", 00:09:25.401 "dma_device_type": 1 00:09:25.401 } 00:09:25.401 ], 00:09:25.401 "driver_specific": { 00:09:25.401 "nvme": [ 00:09:25.401 { 00:09:25.401 "trid": { 00:09:25.401 "trtype": "TCP", 00:09:25.401 "adrfam": "IPv4", 00:09:25.401 "traddr": "10.0.0.2", 00:09:25.401 "trsvcid": "4420", 00:09:25.401 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:25.401 }, 00:09:25.401 "ctrlr_data": { 00:09:25.401 "cntlid": 1, 00:09:25.401 "vendor_id": "0x8086", 00:09:25.401 "model_number": "SPDK bdev Controller", 00:09:25.401 "serial_number": "SPDK0", 00:09:25.401 "firmware_revision": "25.01", 00:09:25.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:25.401 "oacs": { 00:09:25.401 "security": 0, 00:09:25.401 "format": 0, 00:09:25.401 "firmware": 0, 00:09:25.401 "ns_manage": 0 00:09:25.401 }, 00:09:25.401 "multi_ctrlr": true, 00:09:25.401 "ana_reporting": false 00:09:25.401 }, 00:09:25.401 "vs": { 00:09:25.401 "nvme_version": "1.3" 00:09:25.401 }, 00:09:25.401 "ns_data": { 00:09:25.401 "id": 1, 00:09:25.401 "can_share": true 00:09:25.401 } 00:09:25.401 } 00:09:25.401 ], 00:09:25.401 "mp_policy": "active_passive" 00:09:25.401 } 00:09:25.401 } 00:09:25.401 ] 00:09:25.401 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1249662 00:09:25.401 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:25.401 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:25.401 Running I/O for 10 seconds... 00:09:26.335 Latency(us) 00:09:26.335 [2024-11-19T09:37:13.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.335 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:26.335 [2024-11-19T09:37:13.958Z] =================================================================================================================== 00:09:26.335 [2024-11-19T09:37:13.958Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:26.335 00:09:27.270 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:27.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.529 Nvme0n1 : 2.00 15130.50 59.10 0.00 0.00 0.00 0.00 0.00 00:09:27.529 [2024-11-19T09:37:15.152Z] =================================================================================================================== 00:09:27.529 [2024-11-19T09:37:15.152Z] Total : 15130.50 59.10 0.00 0.00 0.00 0.00 0.00 00:09:27.529 00:09:27.529 true 00:09:27.529 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:27.529 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:27.787 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:27.787 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:27.787 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1249662 00:09:28.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.353 Nvme0n1 : 3.00 15189.67 59.33 0.00 0.00 0.00 0.00 0.00 00:09:28.353 [2024-11-19T09:37:15.976Z] =================================================================================================================== 00:09:28.353 [2024-11-19T09:37:15.976Z] Total : 15189.67 59.33 0.00 0.00 0.00 0.00 0.00 00:09:28.353 00:09:29.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.728 Nvme0n1 : 4.00 15281.75 59.69 0.00 0.00 0.00 0.00 0.00 00:09:29.728 [2024-11-19T09:37:17.351Z] =================================================================================================================== 00:09:29.728 [2024-11-19T09:37:17.351Z] Total : 15281.75 59.69 0.00 0.00 0.00 0.00 0.00 00:09:29.728 00:09:30.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.294 Nvme0n1 : 5.00 15349.80 59.96 0.00 0.00 0.00 0.00 0.00 00:09:30.294 [2024-11-19T09:37:17.917Z] =================================================================================================================== 00:09:30.294 [2024-11-19T09:37:17.917Z] Total : 15349.80 59.96 0.00 0.00 0.00 0.00 0.00 00:09:30.294 00:09:31.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.667 Nvme0n1 : 6.00 15405.83 60.18 0.00 0.00 0.00 0.00 0.00 00:09:31.667 [2024-11-19T09:37:19.290Z] =================================================================================================================== 00:09:31.667 
[2024-11-19T09:37:19.290Z] Total : 15405.83 60.18 0.00 0.00 0.00 0.00 0.00 00:09:31.667 00:09:32.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.602 Nvme0n1 : 7.00 15454.71 60.37 0.00 0.00 0.00 0.00 0.00 00:09:32.602 [2024-11-19T09:37:20.225Z] =================================================================================================================== 00:09:32.602 [2024-11-19T09:37:20.225Z] Total : 15454.71 60.37 0.00 0.00 0.00 0.00 0.00 00:09:32.602 00:09:33.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.536 Nvme0n1 : 8.00 15475.50 60.45 0.00 0.00 0.00 0.00 0.00 00:09:33.536 [2024-11-19T09:37:21.159Z] =================================================================================================================== 00:09:33.536 [2024-11-19T09:37:21.159Z] Total : 15475.50 60.45 0.00 0.00 0.00 0.00 0.00 00:09:33.536 00:09:34.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.469 Nvme0n1 : 9.00 15505.78 60.57 0.00 0.00 0.00 0.00 0.00 00:09:34.469 [2024-11-19T09:37:22.092Z] =================================================================================================================== 00:09:34.469 [2024-11-19T09:37:22.092Z] Total : 15505.78 60.57 0.00 0.00 0.00 0.00 0.00 00:09:34.469 00:09:35.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.402 Nvme0n1 : 10.00 15530.00 60.66 0.00 0.00 0.00 0.00 0.00 00:09:35.402 [2024-11-19T09:37:23.025Z] =================================================================================================================== 00:09:35.402 [2024-11-19T09:37:23.025Z] Total : 15530.00 60.66 0.00 0.00 0.00 0.00 0.00 00:09:35.402 00:09:35.402 00:09:35.402 Latency(us) 00:09:35.402 [2024-11-19T09:37:23.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:35.402 Nvme0n1 : 10.01 15531.97 60.67 0.00 0.00 8236.32 2876.30 15922.82 00:09:35.402 [2024-11-19T09:37:23.025Z] =================================================================================================================== 00:09:35.402 [2024-11-19T09:37:23.025Z] Total : 15531.97 60.67 0.00 0.00 8236.32 2876.30 15922.82 00:09:35.402 { 00:09:35.402 "results": [ 00:09:35.402 { 00:09:35.402 "job": "Nvme0n1", 00:09:35.402 "core_mask": "0x2", 00:09:35.402 "workload": "randwrite", 00:09:35.402 "status": "finished", 00:09:35.402 "queue_depth": 128, 00:09:35.402 "io_size": 4096, 00:09:35.402 "runtime": 10.006974, 00:09:35.402 "iops": 15531.968005512956, 00:09:35.402 "mibps": 60.67175002153498, 00:09:35.402 "io_failed": 0, 00:09:35.402 "io_timeout": 0, 00:09:35.402 "avg_latency_us": 8236.323740819853, 00:09:35.402 "min_latency_us": 2876.302222222222, 00:09:35.402 "max_latency_us": 15922.82074074074 00:09:35.402 } 00:09:35.402 ], 00:09:35.402 "core_count": 1 00:09:35.402 } 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1249633 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1249633 ']' 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1249633 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1249633 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:35.402 10:37:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1249633' 00:09:35.402 killing process with pid 1249633 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1249633 00:09:35.402 Received shutdown signal, test time was about 10.000000 seconds 00:09:35.402 00:09:35.402 Latency(us) 00:09:35.402 [2024-11-19T09:37:23.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.402 [2024-11-19T09:37:23.025Z] =================================================================================================================== 00:09:35.402 [2024-11-19T09:37:23.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:35.402 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1249633 00:09:35.659 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.917 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:36.175 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:36.175 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:36.433 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:36.433 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:36.433 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.691 [2024-11-19 10:37:24.244037] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.691 
10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:36.691 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:36.949 request: 00:09:36.949 { 00:09:36.949 "uuid": "86145871-26ff-4b95-a8d3-3bd6166d5772", 00:09:36.949 "method": "bdev_lvol_get_lvstores", 00:09:36.949 "req_id": 1 00:09:36.949 } 00:09:36.949 Got JSON-RPC error response 00:09:36.949 response: 00:09:36.949 { 00:09:36.949 "code": -19, 00:09:36.949 "message": "No such device" 00:09:36.949 } 00:09:37.207 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:37.207 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.207 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.207 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.207 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.207 aio_bdev 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev a81f66d1-6d1b-40b1-bf28-93344586589e 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a81f66d1-6d1b-40b1-bf28-93344586589e 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.465 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.722 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a81f66d1-6d1b-40b1-bf28-93344586589e -t 2000 00:09:37.981 [ 00:09:37.981 { 00:09:37.981 "name": "a81f66d1-6d1b-40b1-bf28-93344586589e", 00:09:37.981 "aliases": [ 00:09:37.981 "lvs/lvol" 00:09:37.981 ], 00:09:37.981 "product_name": "Logical Volume", 00:09:37.981 "block_size": 4096, 00:09:37.981 "num_blocks": 38912, 00:09:37.981 "uuid": "a81f66d1-6d1b-40b1-bf28-93344586589e", 00:09:37.981 "assigned_rate_limits": { 00:09:37.981 "rw_ios_per_sec": 0, 00:09:37.981 "rw_mbytes_per_sec": 0, 00:09:37.981 "r_mbytes_per_sec": 0, 00:09:37.981 "w_mbytes_per_sec": 0 00:09:37.981 }, 00:09:37.981 "claimed": false, 00:09:37.981 "zoned": false, 00:09:37.981 "supported_io_types": { 00:09:37.981 "read": true, 00:09:37.981 "write": true, 00:09:37.981 "unmap": true, 00:09:37.981 "flush": false, 00:09:37.981 "reset": true, 00:09:37.981 
"nvme_admin": false, 00:09:37.981 "nvme_io": false, 00:09:37.981 "nvme_io_md": false, 00:09:37.981 "write_zeroes": true, 00:09:37.981 "zcopy": false, 00:09:37.981 "get_zone_info": false, 00:09:37.981 "zone_management": false, 00:09:37.981 "zone_append": false, 00:09:37.981 "compare": false, 00:09:37.981 "compare_and_write": false, 00:09:37.981 "abort": false, 00:09:37.981 "seek_hole": true, 00:09:37.981 "seek_data": true, 00:09:37.981 "copy": false, 00:09:37.981 "nvme_iov_md": false 00:09:37.981 }, 00:09:37.981 "driver_specific": { 00:09:37.981 "lvol": { 00:09:37.981 "lvol_store_uuid": "86145871-26ff-4b95-a8d3-3bd6166d5772", 00:09:37.981 "base_bdev": "aio_bdev", 00:09:37.981 "thin_provision": false, 00:09:37.981 "num_allocated_clusters": 38, 00:09:37.981 "snapshot": false, 00:09:37.981 "clone": false, 00:09:37.981 "esnap_clone": false 00:09:37.981 } 00:09:37.981 } 00:09:37.981 } 00:09:37.981 ] 00:09:37.981 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:37.981 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:37.981 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:38.239 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:38.239 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:38.239 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:38.497 10:37:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:38.497 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a81f66d1-6d1b-40b1-bf28-93344586589e 00:09:38.754 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86145871-26ff-4b95-a8d3-3bd6166d5772 00:09:39.012 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.271 00:09:39.271 real 0m17.647s 00:09:39.271 user 0m17.138s 00:09:39.271 sys 0m1.883s 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:39.271 ************************************ 00:09:39.271 END TEST lvs_grow_clean 00:09:39.271 ************************************ 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:39.271 ************************************ 
00:09:39.271 START TEST lvs_grow_dirty 00:09:39.271 ************************************ 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.271 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:39.528 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:39.528 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:39.787 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:39.787 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:39.787 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:40.045 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:40.045 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:40.045 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec lvol 150 00:09:40.303 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:40.303 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:40.303 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:40.561 [2024-11-19 10:37:28.145690] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:40.561 [2024-11-19 10:37:28.145790] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:40.561 true 00:09:40.561 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:40.561 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:40.818 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:40.818 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:41.076 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:41.334 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:41.591 [2024-11-19 10:37:29.196761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.849 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1251715 00:09:42.107 10:37:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1251715 /var/tmp/bdevperf.sock 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1251715 ']' 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:42.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.107 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.107 [2024-11-19 10:37:29.519250] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:42.107 [2024-11-19 10:37:29.519342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251715 ] 00:09:42.107 [2024-11-19 10:37:29.585015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.107 [2024-11-19 10:37:29.646204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.365 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.365 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:42.365 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:42.929 Nvme0n1 00:09:42.929 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:42.929 [ 00:09:42.929 { 00:09:42.929 "name": "Nvme0n1", 00:09:42.929 "aliases": [ 00:09:42.929 "baa595d8-e6e4-42da-a7bc-0700b9a42d46" 00:09:42.929 ], 00:09:42.929 "product_name": "NVMe disk", 00:09:42.929 "block_size": 4096, 00:09:42.929 "num_blocks": 38912, 00:09:42.929 "uuid": "baa595d8-e6e4-42da-a7bc-0700b9a42d46", 00:09:42.929 "numa_id": 0, 00:09:42.929 "assigned_rate_limits": { 00:09:42.929 "rw_ios_per_sec": 0, 00:09:42.929 "rw_mbytes_per_sec": 0, 00:09:42.929 "r_mbytes_per_sec": 0, 00:09:42.929 "w_mbytes_per_sec": 0 00:09:42.929 }, 00:09:42.929 "claimed": false, 00:09:42.929 "zoned": false, 00:09:42.929 "supported_io_types": { 00:09:42.929 "read": true, 
00:09:42.929 "write": true, 00:09:42.929 "unmap": true, 00:09:42.929 "flush": true, 00:09:42.929 "reset": true, 00:09:42.929 "nvme_admin": true, 00:09:42.929 "nvme_io": true, 00:09:42.929 "nvme_io_md": false, 00:09:42.929 "write_zeroes": true, 00:09:42.929 "zcopy": false, 00:09:42.929 "get_zone_info": false, 00:09:42.929 "zone_management": false, 00:09:42.929 "zone_append": false, 00:09:42.929 "compare": true, 00:09:42.929 "compare_and_write": true, 00:09:42.929 "abort": true, 00:09:42.929 "seek_hole": false, 00:09:42.929 "seek_data": false, 00:09:42.929 "copy": true, 00:09:42.929 "nvme_iov_md": false 00:09:42.929 }, 00:09:42.929 "memory_domains": [ 00:09:42.929 { 00:09:42.929 "dma_device_id": "system", 00:09:42.929 "dma_device_type": 1 00:09:42.929 } 00:09:42.929 ], 00:09:42.929 "driver_specific": { 00:09:42.929 "nvme": [ 00:09:42.929 { 00:09:42.929 "trid": { 00:09:42.929 "trtype": "TCP", 00:09:42.929 "adrfam": "IPv4", 00:09:42.929 "traddr": "10.0.0.2", 00:09:42.929 "trsvcid": "4420", 00:09:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:42.929 }, 00:09:42.929 "ctrlr_data": { 00:09:42.929 "cntlid": 1, 00:09:42.929 "vendor_id": "0x8086", 00:09:42.929 "model_number": "SPDK bdev Controller", 00:09:42.929 "serial_number": "SPDK0", 00:09:42.929 "firmware_revision": "25.01", 00:09:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.929 "oacs": { 00:09:42.929 "security": 0, 00:09:42.929 "format": 0, 00:09:42.929 "firmware": 0, 00:09:42.929 "ns_manage": 0 00:09:42.929 }, 00:09:42.929 "multi_ctrlr": true, 00:09:42.929 "ana_reporting": false 00:09:42.929 }, 00:09:42.929 "vs": { 00:09:42.929 "nvme_version": "1.3" 00:09:42.929 }, 00:09:42.929 "ns_data": { 00:09:42.929 "id": 1, 00:09:42.929 "can_share": true 00:09:42.929 } 00:09:42.929 } 00:09:42.929 ], 00:09:42.929 "mp_policy": "active_passive" 00:09:42.929 } 00:09:42.929 } 00:09:42.929 ] 00:09:42.929 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1251851 00:09:42.929 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:42.930 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:43.187 Running I/O for 10 seconds... 00:09:44.118 Latency(us) 00:09:44.118 [2024-11-19T09:37:31.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.118 Nvme0n1 : 1.00 15058.00 58.82 0.00 0.00 0.00 0.00 0.00 00:09:44.118 [2024-11-19T09:37:31.741Z] =================================================================================================================== 00:09:44.118 [2024-11-19T09:37:31.742Z] Total : 15058.00 58.82 0.00 0.00 0.00 0.00 0.00 00:09:44.119 00:09:45.050 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:45.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.050 Nvme0n1 : 2.00 15245.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:45.050 [2024-11-19T09:37:32.673Z] =================================================================================================================== 00:09:45.050 [2024-11-19T09:37:32.673Z] Total : 15245.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:45.050 00:09:45.308 true 00:09:45.308 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:45.308 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:45.566 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:45.566 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:45.566 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1251851 00:09:46.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.131 Nvme0n1 : 3.00 15349.67 59.96 0.00 0.00 0.00 0.00 0.00 00:09:46.131 [2024-11-19T09:37:33.754Z] =================================================================================================================== 00:09:46.131 [2024-11-19T09:37:33.754Z] Total : 15349.67 59.96 0.00 0.00 0.00 0.00 0.00 00:09:46.131 00:09:47.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.064 Nvme0n1 : 4.00 15419.25 60.23 0.00 0.00 0.00 0.00 0.00 00:09:47.064 [2024-11-19T09:37:34.687Z] =================================================================================================================== 00:09:47.064 [2024-11-19T09:37:34.687Z] Total : 15419.25 60.23 0.00 0.00 0.00 0.00 0.00 00:09:47.064 00:09:48.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.437 Nvme0n1 : 5.00 15498.60 60.54 0.00 0.00 0.00 0.00 0.00 00:09:48.437 [2024-11-19T09:37:36.060Z] =================================================================================================================== 00:09:48.437 [2024-11-19T09:37:36.060Z] Total : 15498.60 60.54 0.00 0.00 0.00 0.00 0.00 00:09:48.437 00:09:49.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.370 Nvme0n1 : 6.00 15551.50 60.75 0.00 0.00 0.00 0.00 0.00 00:09:49.370 [2024-11-19T09:37:36.993Z] =================================================================================================================== 00:09:49.370 
[2024-11-19T09:37:36.993Z] Total : 15551.50 60.75 0.00 0.00 0.00 0.00 0.00 00:09:49.370 00:09:50.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.304 Nvme0n1 : 7.00 15599.00 60.93 0.00 0.00 0.00 0.00 0.00 00:09:50.304 [2024-11-19T09:37:37.927Z] =================================================================================================================== 00:09:50.304 [2024-11-19T09:37:37.927Z] Total : 15599.00 60.93 0.00 0.00 0.00 0.00 0.00 00:09:50.304 00:09:51.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.237 Nvme0n1 : 8.00 15635.38 61.08 0.00 0.00 0.00 0.00 0.00 00:09:51.237 [2024-11-19T09:37:38.860Z] =================================================================================================================== 00:09:51.237 [2024-11-19T09:37:38.860Z] Total : 15635.38 61.08 0.00 0.00 0.00 0.00 0.00 00:09:51.237 00:09:52.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.169 Nvme0n1 : 9.00 15662.67 61.18 0.00 0.00 0.00 0.00 0.00 00:09:52.169 [2024-11-19T09:37:39.792Z] =================================================================================================================== 00:09:52.169 [2024-11-19T09:37:39.792Z] Total : 15662.67 61.18 0.00 0.00 0.00 0.00 0.00 00:09:52.169 00:09:53.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.100 Nvme0n1 : 10.00 15672.70 61.22 0.00 0.00 0.00 0.00 0.00 00:09:53.100 [2024-11-19T09:37:40.723Z] =================================================================================================================== 00:09:53.100 [2024-11-19T09:37:40.723Z] Total : 15672.70 61.22 0.00 0.00 0.00 0.00 0.00 00:09:53.100 00:09:53.100 00:09:53.100 Latency(us) 00:09:53.100 [2024-11-19T09:37:40.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:53.100 Nvme0n1 : 10.01 15673.04 61.22 0.00 0.00 8162.16 3325.35 15340.28 00:09:53.100 [2024-11-19T09:37:40.723Z] =================================================================================================================== 00:09:53.100 [2024-11-19T09:37:40.723Z] Total : 15673.04 61.22 0.00 0.00 8162.16 3325.35 15340.28 00:09:53.100 { 00:09:53.100 "results": [ 00:09:53.100 { 00:09:53.100 "job": "Nvme0n1", 00:09:53.100 "core_mask": "0x2", 00:09:53.100 "workload": "randwrite", 00:09:53.100 "status": "finished", 00:09:53.100 "queue_depth": 128, 00:09:53.100 "io_size": 4096, 00:09:53.100 "runtime": 10.007948, 00:09:53.100 "iops": 15673.043065371643, 00:09:53.100 "mibps": 61.22282447410798, 00:09:53.100 "io_failed": 0, 00:09:53.100 "io_timeout": 0, 00:09:53.100 "avg_latency_us": 8162.162607418742, 00:09:53.100 "min_latency_us": 3325.345185185185, 00:09:53.100 "max_latency_us": 15340.278518518518 00:09:53.100 } 00:09:53.100 ], 00:09:53.100 "core_count": 1 00:09:53.100 } 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1251715 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1251715 ']' 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1251715 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.100 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1251715 00:09:53.358 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:53.358 10:37:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:53.358 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1251715' 00:09:53.358 killing process with pid 1251715 00:09:53.358 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1251715 00:09:53.358 Received shutdown signal, test time was about 10.000000 seconds 00:09:53.358 00:09:53.358 Latency(us) 00:09:53.358 [2024-11-19T09:37:40.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.358 [2024-11-19T09:37:40.981Z] =================================================================================================================== 00:09:53.358 [2024-11-19T09:37:40.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:53.358 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1251715 00:09:53.358 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.615 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1249197 00:09:54.181 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1249197 00:09:54.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1249197 Killed "${NVMF_APP[@]}" "$@" 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1253187 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1253187 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1253187 ']' 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.438 10:37:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.438 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.439 [2024-11-19 10:37:41.883563] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:09:54.439 [2024-11-19 10:37:41.883658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.439 [2024-11-19 10:37:41.956727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.439 [2024-11-19 10:37:42.014819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.439 [2024-11-19 10:37:42.014887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.439 [2024-11-19 10:37:42.014901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.439 [2024-11-19 10:37:42.014912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.439 [2024-11-19 10:37:42.014944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:54.439 [2024-11-19 10:37:42.015555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.697 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:54.955 [2024-11-19 10:37:42.401371] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:54.955 [2024-11-19 10:37:42.401509] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:54.955 [2024-11-19 10:37:42.401562] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=baa595d8-e6e4-42da-a7bc-0700b9a42d46 
00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.955 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:55.252 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b baa595d8-e6e4-42da-a7bc-0700b9a42d46 -t 2000 00:09:55.535 [ 00:09:55.535 { 00:09:55.535 "name": "baa595d8-e6e4-42da-a7bc-0700b9a42d46", 00:09:55.535 "aliases": [ 00:09:55.535 "lvs/lvol" 00:09:55.535 ], 00:09:55.535 "product_name": "Logical Volume", 00:09:55.535 "block_size": 4096, 00:09:55.535 "num_blocks": 38912, 00:09:55.535 "uuid": "baa595d8-e6e4-42da-a7bc-0700b9a42d46", 00:09:55.535 "assigned_rate_limits": { 00:09:55.535 "rw_ios_per_sec": 0, 00:09:55.535 "rw_mbytes_per_sec": 0, 00:09:55.535 "r_mbytes_per_sec": 0, 00:09:55.535 "w_mbytes_per_sec": 0 00:09:55.535 }, 00:09:55.536 "claimed": false, 00:09:55.536 "zoned": false, 00:09:55.536 "supported_io_types": { 00:09:55.536 "read": true, 00:09:55.536 "write": true, 00:09:55.536 "unmap": true, 00:09:55.536 "flush": false, 00:09:55.536 "reset": true, 00:09:55.536 "nvme_admin": false, 00:09:55.536 "nvme_io": false, 00:09:55.536 "nvme_io_md": false, 00:09:55.536 "write_zeroes": true, 00:09:55.536 "zcopy": false, 00:09:55.536 "get_zone_info": false, 00:09:55.536 "zone_management": false, 00:09:55.536 "zone_append": 
false, 00:09:55.536 "compare": false, 00:09:55.536 "compare_and_write": false, 00:09:55.536 "abort": false, 00:09:55.536 "seek_hole": true, 00:09:55.536 "seek_data": true, 00:09:55.536 "copy": false, 00:09:55.536 "nvme_iov_md": false 00:09:55.536 }, 00:09:55.536 "driver_specific": { 00:09:55.536 "lvol": { 00:09:55.536 "lvol_store_uuid": "5cee0d64-a568-4b2c-97a4-ca2f7428a1ec", 00:09:55.536 "base_bdev": "aio_bdev", 00:09:55.536 "thin_provision": false, 00:09:55.536 "num_allocated_clusters": 38, 00:09:55.536 "snapshot": false, 00:09:55.536 "clone": false, 00:09:55.536 "esnap_clone": false 00:09:55.536 } 00:09:55.536 } 00:09:55.536 } 00:09:55.536 ] 00:09:55.536 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:55.536 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:55.536 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:55.794 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:55.794 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:55.794 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:56.051 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:56.051 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:56.319 [2024-11-19 10:37:43.778980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.319 10:37:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:56.319 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:56.577 request: 00:09:56.577 { 00:09:56.577 "uuid": "5cee0d64-a568-4b2c-97a4-ca2f7428a1ec", 00:09:56.577 "method": "bdev_lvol_get_lvstores", 00:09:56.577 "req_id": 1 00:09:56.577 } 00:09:56.577 Got JSON-RPC error response 00:09:56.577 response: 00:09:56.577 { 00:09:56.577 "code": -19, 00:09:56.577 "message": "No such device" 00:09:56.577 } 00:09:56.577 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:56.577 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.577 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.577 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.577 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.835 aio_bdev 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.835 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.093 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b baa595d8-e6e4-42da-a7bc-0700b9a42d46 -t 2000 00:09:57.351 [ 00:09:57.351 { 00:09:57.351 "name": "baa595d8-e6e4-42da-a7bc-0700b9a42d46", 00:09:57.351 "aliases": [ 00:09:57.351 "lvs/lvol" 00:09:57.351 ], 00:09:57.351 "product_name": "Logical Volume", 00:09:57.351 "block_size": 4096, 00:09:57.351 "num_blocks": 38912, 00:09:57.351 "uuid": "baa595d8-e6e4-42da-a7bc-0700b9a42d46", 00:09:57.351 "assigned_rate_limits": { 00:09:57.351 "rw_ios_per_sec": 0, 00:09:57.351 "rw_mbytes_per_sec": 0, 00:09:57.351 "r_mbytes_per_sec": 0, 00:09:57.351 "w_mbytes_per_sec": 0 00:09:57.351 }, 00:09:57.351 "claimed": false, 00:09:57.351 "zoned": false, 00:09:57.351 "supported_io_types": { 00:09:57.351 "read": true, 00:09:57.351 "write": true, 00:09:57.351 "unmap": true, 00:09:57.351 "flush": false, 00:09:57.351 "reset": true, 00:09:57.351 "nvme_admin": false, 00:09:57.351 "nvme_io": false, 00:09:57.351 "nvme_io_md": false, 00:09:57.351 "write_zeroes": true, 00:09:57.351 "zcopy": false, 00:09:57.351 "get_zone_info": false, 00:09:57.351 "zone_management": false, 00:09:57.351 "zone_append": false, 00:09:57.351 "compare": false, 00:09:57.351 "compare_and_write": false, 
00:09:57.351 "abort": false, 00:09:57.351 "seek_hole": true, 00:09:57.351 "seek_data": true, 00:09:57.351 "copy": false, 00:09:57.351 "nvme_iov_md": false 00:09:57.351 }, 00:09:57.351 "driver_specific": { 00:09:57.351 "lvol": { 00:09:57.351 "lvol_store_uuid": "5cee0d64-a568-4b2c-97a4-ca2f7428a1ec", 00:09:57.351 "base_bdev": "aio_bdev", 00:09:57.351 "thin_provision": false, 00:09:57.351 "num_allocated_clusters": 38, 00:09:57.351 "snapshot": false, 00:09:57.351 "clone": false, 00:09:57.351 "esnap_clone": false 00:09:57.351 } 00:09:57.351 } 00:09:57.351 } 00:09:57.351 ] 00:09:57.351 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:57.351 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:57.351 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:57.609 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:57.609 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:57.609 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:57.867 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:57.867 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete baa595d8-e6e4-42da-a7bc-0700b9a42d46 00:09:58.124 10:37:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5cee0d64-a568-4b2c-97a4-ca2f7428a1ec 00:09:58.382 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.639 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:58.897 00:09:58.897 real 0m19.462s 00:09:58.897 user 0m48.480s 00:09:58.897 sys 0m4.829s 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 ************************************ 00:09:58.897 END TEST lvs_grow_dirty 00:09:58.897 ************************************ 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:58.897 nvmf_trace.0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.897 rmmod nvme_tcp 00:09:58.897 rmmod nvme_fabrics 00:09:58.897 rmmod nvme_keyring 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1253187 ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1253187 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1253187 ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1253187 
00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1253187 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1253187' 00:09:58.897 killing process with pid 1253187 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1253187 00:09:58.897 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1253187 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.156 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.062 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.320 00:10:01.320 real 0m42.656s 00:10:01.320 user 1m11.680s 00:10:01.320 sys 0m8.709s 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.320 ************************************ 00:10:01.320 END TEST nvmf_lvs_grow 00:10:01.320 ************************************ 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.320 ************************************ 00:10:01.320 START TEST nvmf_bdev_io_wait 00:10:01.320 ************************************ 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:01.320 * Looking for test storage... 
00:10:01.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.320 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.321 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.321 --rc genhtml_branch_coverage=1 00:10:01.321 --rc genhtml_function_coverage=1 00:10:01.321 --rc genhtml_legend=1 00:10:01.321 --rc geninfo_all_blocks=1 00:10:01.321 --rc geninfo_unexecuted_blocks=1 00:10:01.321 00:10:01.321 ' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.321 --rc genhtml_branch_coverage=1 00:10:01.321 --rc genhtml_function_coverage=1 00:10:01.321 --rc genhtml_legend=1 00:10:01.321 --rc geninfo_all_blocks=1 00:10:01.321 --rc geninfo_unexecuted_blocks=1 00:10:01.321 00:10:01.321 ' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.321 --rc genhtml_branch_coverage=1 00:10:01.321 --rc genhtml_function_coverage=1 00:10:01.321 --rc genhtml_legend=1 00:10:01.321 --rc geninfo_all_blocks=1 00:10:01.321 --rc geninfo_unexecuted_blocks=1 00:10:01.321 00:10:01.321 ' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.321 --rc genhtml_branch_coverage=1 00:10:01.321 --rc genhtml_function_coverage=1 00:10:01.321 --rc genhtml_legend=1 00:10:01.321 --rc geninfo_all_blocks=1 00:10:01.321 --rc geninfo_unexecuted_blocks=1 00:10:01.321 00:10:01.321 ' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.321 10:37:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.321 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:01.322 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.322 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.322 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.322 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.848 10:37:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:03.848 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:03.848 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.848 10:37:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:03.848 Found net devices under 0000:09:00.0: cvl_0_0 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.848 
10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:03.848 Found net devices under 0000:09:00.1: cvl_0_1 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.848 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.849 10:37:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:10:03.849 00:10:03.849 --- 10.0.0.2 ping statistics --- 00:10:03.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.849 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:10:03.849 00:10:03.849 --- 10.0.0.1 ping statistics --- 00:10:03.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.849 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1255729 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1255729 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1255729 ']' 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.849 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.849 [2024-11-19 10:37:51.299969] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:03.849 [2024-11-19 10:37:51.300060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.849 [2024-11-19 10:37:51.375297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.849 [2024-11-19 10:37:51.437409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.849 [2024-11-19 10:37:51.437457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.849 [2024-11-19 10:37:51.437485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.849 [2024-11-19 10:37:51.437496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.849 [2024-11-19 10:37:51.437506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.849 [2024-11-19 10:37:51.439316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.849 [2024-11-19 10:37:51.440325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.849 [2024-11-19 10:37:51.440395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.849 [2024-11-19 10:37:51.440398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:37:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 [2024-11-19 10:37:51.647485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 Malloc0 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 
10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 [2024-11-19 10:37:51.701537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1255872 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1255874 
00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.108 { 00:10:04.108 "params": { 00:10:04.108 "name": "Nvme$subsystem", 00:10:04.108 "trtype": "$TEST_TRANSPORT", 00:10:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.108 "adrfam": "ipv4", 00:10:04.108 "trsvcid": "$NVMF_PORT", 00:10:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.108 "hdgst": ${hdgst:-false}, 00:10:04.108 "ddgst": ${ddgst:-false} 00:10:04.108 }, 00:10:04.108 "method": "bdev_nvme_attach_controller" 00:10:04.108 } 00:10:04.108 EOF 00:10:04.108 )") 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1255876 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.108 { 00:10:04.108 "params": { 00:10:04.108 
"name": "Nvme$subsystem", 00:10:04.108 "trtype": "$TEST_TRANSPORT", 00:10:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.108 "adrfam": "ipv4", 00:10:04.108 "trsvcid": "$NVMF_PORT", 00:10:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.108 "hdgst": ${hdgst:-false}, 00:10:04.108 "ddgst": ${ddgst:-false} 00:10:04.108 }, 00:10:04.108 "method": "bdev_nvme_attach_controller" 00:10:04.108 } 00:10:04.108 EOF 00:10:04.108 )") 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1255879 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.108 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.108 { 00:10:04.108 "params": { 00:10:04.108 "name": "Nvme$subsystem", 00:10:04.108 "trtype": "$TEST_TRANSPORT", 00:10:04.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.108 "adrfam": "ipv4", 00:10:04.108 "trsvcid": "$NVMF_PORT", 00:10:04.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.108 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:04.108 "hdgst": ${hdgst:-false}, 00:10:04.108 "ddgst": ${ddgst:-false} 00:10:04.108 }, 00:10:04.108 "method": "bdev_nvme_attach_controller" 00:10:04.109 } 00:10:04.109 EOF 00:10:04.109 )") 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.109 { 00:10:04.109 "params": { 00:10:04.109 "name": "Nvme$subsystem", 00:10:04.109 "trtype": "$TEST_TRANSPORT", 00:10:04.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.109 "adrfam": "ipv4", 00:10:04.109 "trsvcid": "$NVMF_PORT", 00:10:04.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.109 "hdgst": ${hdgst:-false}, 00:10:04.109 "ddgst": ${ddgst:-false} 00:10:04.109 }, 00:10:04.109 "method": "bdev_nvme_attach_controller" 00:10:04.109 } 00:10:04.109 EOF 00:10:04.109 )") 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1255872 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.109 "params": { 00:10:04.109 "name": "Nvme1", 00:10:04.109 "trtype": "tcp", 00:10:04.109 "traddr": "10.0.0.2", 00:10:04.109 "adrfam": "ipv4", 00:10:04.109 "trsvcid": "4420", 00:10:04.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.109 "hdgst": false, 00:10:04.109 "ddgst": false 00:10:04.109 }, 00:10:04.109 "method": "bdev_nvme_attach_controller" 00:10:04.109 }' 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.109 "params": { 00:10:04.109 "name": "Nvme1", 00:10:04.109 "trtype": "tcp", 00:10:04.109 "traddr": "10.0.0.2", 00:10:04.109 "adrfam": "ipv4", 00:10:04.109 "trsvcid": "4420", 00:10:04.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.109 "hdgst": false, 00:10:04.109 "ddgst": false 00:10:04.109 }, 00:10:04.109 "method": "bdev_nvme_attach_controller" 00:10:04.109 }' 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.109 "params": { 00:10:04.109 "name": "Nvme1", 00:10:04.109 "trtype": "tcp", 00:10:04.109 "traddr": "10.0.0.2", 00:10:04.109 "adrfam": "ipv4", 00:10:04.109 "trsvcid": "4420", 00:10:04.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.109 "hdgst": false, 00:10:04.109 "ddgst": false 00:10:04.109 }, 00:10:04.109 "method": "bdev_nvme_attach_controller" 00:10:04.109 }' 00:10:04.109 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.109 "params": { 00:10:04.109 "name": "Nvme1", 00:10:04.109 "trtype": "tcp", 00:10:04.109 "traddr": "10.0.0.2", 00:10:04.109 "adrfam": "ipv4", 00:10:04.109 "trsvcid": "4420", 00:10:04.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.109 "hdgst": false, 00:10:04.109 "ddgst": false 00:10:04.109 }, 00:10:04.109 "method": "bdev_nvme_attach_controller" 00:10:04.109 }' 00:10:04.367 [2024-11-19 10:37:51.752907] Starting SPDK v25.01-pre git sha1 
53ca6a885 / DPDK 24.03.0 initialization... 00:10:04.367 [2024-11-19 10:37:51.752908] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:04.367 [2024-11-19 10:37:51.752909] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:04.367 [2024-11-19 10:37:51.752907] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:04.367 [2024-11-19 10:37:51.752992] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:04.367 [2024-11-19 10:37:51.752993] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:04.367 [2024-11-19 10:37:51.752992] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:04.367 [2024-11-19 10:37:51.752992] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:04.367 [2024-11-19 10:37:51.934106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.624 [2024-11-19 10:37:51.991012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.624 [2024-11-19 10:37:52.040022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.624 [2024-11-19 10:37:52.096639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.624 [2024-11-19
10:37:52.142018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.624 [2024-11-19 10:37:52.198295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.881 [2024-11-19 10:37:52.247566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.881 [2024-11-19 10:37:52.305371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:04.881 Running I/O for 1 seconds... 00:10:04.881 Running I/O for 1 seconds... 00:10:04.881 Running I/O for 1 seconds... 00:10:05.138 Running I/O for 1 seconds... 00:10:06.070 6502.00 IOPS, 25.40 MiB/s [2024-11-19T09:37:53.693Z] 191520.00 IOPS, 748.12 MiB/s 00:10:06.070 Latency(us) 00:10:06.070 [2024-11-19T09:37:53.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.070 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:06.070 Nvme1n1 : 1.00 191157.73 746.71 0.00 0.00 666.05 292.79 1856.85 00:10:06.070 [2024-11-19T09:37:53.693Z] =================================================================================================================== 00:10:06.070 [2024-11-19T09:37:53.693Z] Total : 191157.73 746.71 0.00 0.00 666.05 292.79 1856.85 00:10:06.070 8838.00 IOPS, 34.52 MiB/s 00:10:06.070 Latency(us) 00:10:06.070 [2024-11-19T09:37:53.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.070 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:06.070 Nvme1n1 : 1.02 6528.64 25.50 0.00 0.00 19428.56 6844.87 29709.65 00:10:06.070 [2024-11-19T09:37:53.693Z] =================================================================================================================== 00:10:06.070 [2024-11-19T09:37:53.693Z] Total : 6528.64 25.50 0.00 0.00 19428.56 6844.87 29709.65 00:10:06.070 00:10:06.070 Latency(us) 00:10:06.070 [2024-11-19T09:37:53.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.070 Job: Nvme1n1 (Core Mask 0x20, 
workload: read, depth: 128, IO size: 4096) 00:10:06.071 Nvme1n1 : 1.01 8883.31 34.70 0.00 0.00 14336.99 8155.59 25243.50 00:10:06.071 [2024-11-19T09:37:53.694Z] =================================================================================================================== 00:10:06.071 [2024-11-19T09:37:53.694Z] Total : 8883.31 34.70 0.00 0.00 14336.99 8155.59 25243.50 00:10:06.071 6869.00 IOPS, 26.83 MiB/s 00:10:06.071 Latency(us) 00:10:06.071 [2024-11-19T09:37:53.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.071 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:06.071 Nvme1n1 : 1.01 6974.48 27.24 0.00 0.00 18300.93 3640.89 43690.67 00:10:06.071 [2024-11-19T09:37:53.694Z] =================================================================================================================== 00:10:06.071 [2024-11-19T09:37:53.694Z] Total : 6974.48 27.24 0.00 0.00 18300.93 3640.89 43690.67 00:10:06.071 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1255874 00:10:06.071 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1255876 00:10:06.071 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1255879 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:06.329 10:37:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.329 rmmod nvme_tcp 00:10:06.329 rmmod nvme_fabrics 00:10:06.329 rmmod nvme_keyring 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1255729 ']' 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1255729 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1255729 ']' 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1255729 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255729 
00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255729' 00:10:06.329 killing process with pid 1255729 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1255729 00:10:06.329 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1255729 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.587 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.587 10:37:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.490 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.753 00:10:08.753 real 0m7.377s 00:10:08.753 user 0m16.435s 00:10:08.753 sys 0m3.545s 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.753 ************************************ 00:10:08.753 END TEST nvmf_bdev_io_wait 00:10:08.753 ************************************ 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.753 ************************************ 00:10:08.753 START TEST nvmf_queue_depth 00:10:08.753 ************************************ 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.753 * Looking for test storage... 
00:10:08.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:08.753 
10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.753 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:08.753 --rc genhtml_branch_coverage=1 00:10:08.753 --rc genhtml_function_coverage=1 00:10:08.753 --rc genhtml_legend=1 00:10:08.753 --rc geninfo_all_blocks=1 00:10:08.753 --rc geninfo_unexecuted_blocks=1 00:10:08.753 00:10:08.753 ' 00:10:08.753 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.754 --rc genhtml_branch_coverage=1 00:10:08.754 --rc genhtml_function_coverage=1 00:10:08.754 --rc genhtml_legend=1 00:10:08.754 --rc geninfo_all_blocks=1 00:10:08.754 --rc geninfo_unexecuted_blocks=1 00:10:08.754 00:10:08.754 ' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.754 --rc genhtml_branch_coverage=1 00:10:08.754 --rc genhtml_function_coverage=1 00:10:08.754 --rc genhtml_legend=1 00:10:08.754 --rc geninfo_all_blocks=1 00:10:08.754 --rc geninfo_unexecuted_blocks=1 00:10:08.754 00:10:08.754 ' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.754 --rc genhtml_branch_coverage=1 00:10:08.754 --rc genhtml_function_coverage=1 00:10:08.754 --rc genhtml_legend=1 00:10:08.754 --rc geninfo_all_blocks=1 00:10:08.754 --rc geninfo_unexecuted_blocks=1 00:10:08.754 00:10:08.754 ' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.754 10:37:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.754 10:37:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.754 10:37:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.754 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.287 10:37:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:11.287 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:11.287 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:11.287 Found net devices under 0000:09:00.0: cvl_0_0 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:11.287 Found net devices under 0000:09:00.1: cvl_0_1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.287 
10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.287 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:10:11.288 00:10:11.288 --- 10.0.0.2 ping statistics --- 00:10:11.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.288 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:10:11.288 00:10:11.288 --- 10.0.0.1 ping statistics --- 00:10:11.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.288 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1258112 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1258112 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1258112 ']' 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.288 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.288 [2024-11-19 10:37:58.745484] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:11.288 [2024-11-19 10:37:58.745571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.288 [2024-11-19 10:37:58.818734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.288 [2024-11-19 10:37:58.872953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.288 [2024-11-19 10:37:58.873003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.288 [2024-11-19 10:37:58.873031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.288 [2024-11-19 10:37:58.873048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.288 [2024-11-19 10:37:58.873058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.288 [2024-11-19 10:37:58.873695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.546 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.546 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:11.546 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.546 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.546 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 [2024-11-19 10:37:59.014813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 Malloc0 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 [2024-11-19 10:37:59.064067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.546 10:37:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1258137 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1258137 /var/tmp/bdevperf.sock 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1258137 ']' 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:11.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.546 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 [2024-11-19 10:37:59.111231] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:10:11.546 [2024-11-19 10:37:59.111322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258137 ] 00:10:11.814 [2024-11-19 10:37:59.178524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.814 [2024-11-19 10:37:59.236149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.814 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.814 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:11.814 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:11.814 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.814 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.076 NVMe0n1 00:10:12.076 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.076 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.076 Running I/O for 10 seconds... 
00:10:14.379 8199.00 IOPS, 32.03 MiB/s [2024-11-19T09:38:02.935Z] 8684.00 IOPS, 33.92 MiB/s [2024-11-19T09:38:03.867Z] 8647.00 IOPS, 33.78 MiB/s [2024-11-19T09:38:04.800Z] 8701.75 IOPS, 33.99 MiB/s [2024-11-19T09:38:05.733Z] 8755.00 IOPS, 34.20 MiB/s [2024-11-19T09:38:06.666Z] 8726.33 IOPS, 34.09 MiB/s [2024-11-19T09:38:08.038Z] 8767.43 IOPS, 34.25 MiB/s [2024-11-19T09:38:08.970Z] 8754.25 IOPS, 34.20 MiB/s [2024-11-19T09:38:09.901Z] 8752.11 IOPS, 34.19 MiB/s [2024-11-19T09:38:09.901Z] 8780.60 IOPS, 34.30 MiB/s 00:10:22.278 Latency(us) 00:10:22.278 [2024-11-19T09:38:09.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.278 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:22.278 Verification LBA range: start 0x0 length 0x4000 00:10:22.278 NVMe0n1 : 10.10 8794.80 34.35 0.00 0.00 115946.66 21262.79 72623.60 00:10:22.278 [2024-11-19T09:38:09.901Z] =================================================================================================================== 00:10:22.278 [2024-11-19T09:38:09.901Z] Total : 8794.80 34.35 0.00 0.00 115946.66 21262.79 72623.60 00:10:22.278 { 00:10:22.278 "results": [ 00:10:22.278 { 00:10:22.278 "job": "NVMe0n1", 00:10:22.278 "core_mask": "0x1", 00:10:22.278 "workload": "verify", 00:10:22.278 "status": "finished", 00:10:22.278 "verify_range": { 00:10:22.278 "start": 0, 00:10:22.278 "length": 16384 00:10:22.278 }, 00:10:22.278 "queue_depth": 1024, 00:10:22.279 "io_size": 4096, 00:10:22.279 "runtime": 10.100281, 00:10:22.279 "iops": 8794.804817806555, 00:10:22.279 "mibps": 34.354706319556854, 00:10:22.279 "io_failed": 0, 00:10:22.279 "io_timeout": 0, 00:10:22.279 "avg_latency_us": 115946.65811700252, 00:10:22.279 "min_latency_us": 21262.79111111111, 00:10:22.279 "max_latency_us": 72623.59703703703 00:10:22.279 } 00:10:22.279 ], 00:10:22.279 "core_count": 1 00:10:22.279 } 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1258137 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1258137 ']' 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1258137 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258137 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258137' 00:10:22.279 killing process with pid 1258137 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1258137 00:10:22.279 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.279 00:10:22.279 Latency(us) 00:10:22.279 [2024-11-19T09:38:09.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.279 [2024-11-19T09:38:09.902Z] =================================================================================================================== 00:10:22.279 [2024-11-19T09:38:09.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.279 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1258137 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.537 10:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.537 rmmod nvme_tcp 00:10:22.537 rmmod nvme_fabrics 00:10:22.537 rmmod nvme_keyring 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1258112 ']' 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1258112 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1258112 ']' 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1258112 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258112 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258112' 00:10:22.537 killing process with pid 1258112 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1258112 00:10:22.537 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1258112 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.795 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.332 10:38:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.332 00:10:25.332 real 0m16.218s 00:10:25.332 user 0m22.626s 00:10:25.332 sys 0m3.200s 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.332 ************************************ 00:10:25.332 END TEST nvmf_queue_depth 00:10:25.332 ************************************ 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.332 ************************************ 00:10:25.332 START TEST nvmf_target_multipath 00:10:25.332 ************************************ 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.332 * Looking for test storage... 
00:10:25.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:25.332 10:38:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.332 --rc genhtml_branch_coverage=1 00:10:25.332 --rc genhtml_function_coverage=1 00:10:25.332 --rc genhtml_legend=1 00:10:25.332 --rc geninfo_all_blocks=1 00:10:25.332 --rc geninfo_unexecuted_blocks=1 00:10:25.332 00:10:25.332 ' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.332 --rc genhtml_branch_coverage=1 00:10:25.332 --rc genhtml_function_coverage=1 00:10:25.332 --rc genhtml_legend=1 00:10:25.332 --rc geninfo_all_blocks=1 00:10:25.332 --rc geninfo_unexecuted_blocks=1 00:10:25.332 00:10:25.332 ' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.332 --rc genhtml_branch_coverage=1 00:10:25.332 --rc genhtml_function_coverage=1 00:10:25.332 --rc genhtml_legend=1 00:10:25.332 --rc geninfo_all_blocks=1 00:10:25.332 --rc geninfo_unexecuted_blocks=1 00:10:25.332 00:10:25.332 ' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.332 --rc genhtml_branch_coverage=1 00:10:25.332 --rc genhtml_function_coverage=1 00:10:25.332 --rc genhtml_legend=1 00:10:25.332 --rc geninfo_all_blocks=1 00:10:25.332 --rc geninfo_unexecuted_blocks=1 00:10:25.332 00:10:25.332 ' 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.332 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.333 10:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:27.236 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:27.236 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:27.236 Found net devices under 0000:09:00.0: cvl_0_0 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.236 10:38:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:27.236 Found net devices under 0000:09:00.1: cvl_0_1 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.236 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.237 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.495 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:10:27.496 00:10:27.496 --- 10.0.0.2 ping statistics --- 00:10:27.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.496 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:10:27.496 00:10:27.496 --- 10.0.0.1 ping statistics --- 00:10:27.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.496 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:27.496 only one NIC for nvmf test 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:27.496 10:38:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.496 10:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.496 rmmod nvme_tcp 00:10:27.496 rmmod nvme_fabrics 00:10:27.496 rmmod nvme_keyring 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.496 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.080 00:10:30.080 real 0m4.673s 00:10:30.080 user 0m0.948s 00:10:30.080 sys 0m1.740s 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 ************************************ 00:10:30.080 END TEST nvmf_target_multipath 00:10:30.080 ************************************ 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core 
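The teardown traced above (`nvmftestfini` → `nvmf_tcp_fini`) can be sketched as the following dry-run script. The interface and namespace names (`cvl_0_1`, `cvl_0_0_ns_spdk`) are the ones this particular log uses, and `run` only echoes the commands rather than executing them, since the real sequence needs root and the test hardware:

```shell
# Dry-run sketch of the cleanup sequence shown in the trace.
# Names (cvl_0_1, cvl_0_0_ns_spdk) are taken from this log; other rigs differ.
run() { echo "+ $*"; }

nvmf_tcp_fini_sketch() {
    # Unload the kernel initiator modules (the real script retries up to 20x)
    run modprobe -v -r nvme-tcp
    run modprobe -v -r nvme-fabrics
    # Strip only the SPDK-tagged firewall rules, leaving everything else intact
    run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
    # Remove the target namespace and flush the initiator-side address
    run ip netns delete cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_1
}

nvmf_tcp_fini_sketch
```

The `grep -v SPDK_NVMF` filter works because every rule the setup added carried an `SPDK_NVMF:` comment, so cleanup can drop exactly those rules without touching the host's other firewall state.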
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 ************************************ 00:10:30.080 START TEST nvmf_zcopy 00:10:30.080 ************************************ 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:30.080 * Looking for test storage... 00:10:30.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.080 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.081 10:38:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.081 --rc genhtml_branch_coverage=1 00:10:30.081 --rc genhtml_function_coverage=1 00:10:30.081 --rc genhtml_legend=1 00:10:30.081 --rc geninfo_all_blocks=1 00:10:30.081 --rc geninfo_unexecuted_blocks=1 00:10:30.081 00:10:30.081 ' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.081 --rc genhtml_branch_coverage=1 00:10:30.081 --rc genhtml_function_coverage=1 00:10:30.081 --rc genhtml_legend=1 00:10:30.081 --rc geninfo_all_blocks=1 00:10:30.081 --rc geninfo_unexecuted_blocks=1 00:10:30.081 00:10:30.081 ' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.081 --rc genhtml_branch_coverage=1 00:10:30.081 --rc genhtml_function_coverage=1 00:10:30.081 --rc genhtml_legend=1 00:10:30.081 --rc geninfo_all_blocks=1 00:10:30.081 --rc geninfo_unexecuted_blocks=1 00:10:30.081 00:10:30.081 ' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.081 --rc genhtml_branch_coverage=1 00:10:30.081 --rc 
genhtml_function_coverage=1 00:10:30.081 --rc genhtml_legend=1 00:10:30.081 --rc geninfo_all_blocks=1 00:10:30.081 --rc geninfo_unexecuted_blocks=1 00:10:30.081 00:10:30.081 ' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.081 10:38:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.081 10:38:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.081 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.082 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.082 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.082 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.082 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.082 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.984 10:38:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:31.984 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:31.984 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:31.984 Found net devices under 0000:09:00.0: cvl_0_0 00:10:31.984 10:38:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.984 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:31.985 Found net devices under 0000:09:00.1: cvl_0_1 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.985 10:38:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.985 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:10:32.243 00:10:32.243 --- 10.0.0.2 ping statistics --- 00:10:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.243 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:10:32.243 00:10:32.243 --- 10.0.0.1 ping statistics --- 00:10:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.243 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1263354 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1263354 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- 
# '[' -z 1263354 ']' 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:32.243 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.243 [2024-11-19 10:38:19.756429] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:32.243 [2024-11-19 10:38:19.756525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.243 [2024-11-19 10:38:19.827355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.501 [2024-11-19 10:38:19.882147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.501 [2024-11-19 10:38:19.882195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:32.501 [2024-11-19 10:38:19.882222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.501 [2024-11-19 10:38:19.882232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.501 [2024-11-19 10:38:19.882242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.501 [2024-11-19 10:38:19.882883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.501 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.501 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:32.501 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.501 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.501 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 [2024-11-19 10:38:20.023484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 [2024-11-19 10:38:20.039648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 malloc0 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:32.501 { 00:10:32.501 "params": { 00:10:32.501 "name": "Nvme$subsystem", 00:10:32.501 "trtype": "$TEST_TRANSPORT", 00:10:32.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.501 "adrfam": "ipv4", 00:10:32.501 "trsvcid": "$NVMF_PORT", 00:10:32.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.501 "hdgst": ${hdgst:-false}, 00:10:32.501 "ddgst": ${ddgst:-false} 00:10:32.501 }, 00:10:32.501 "method": "bdev_nvme_attach_controller" 00:10:32.501 } 00:10:32.501 EOF 00:10:32.501 )") 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:32.501 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.501 "params": { 00:10:32.501 "name": "Nvme1", 00:10:32.501 "trtype": "tcp", 00:10:32.501 "traddr": "10.0.0.2", 00:10:32.501 "adrfam": "ipv4", 00:10:32.501 "trsvcid": "4420", 00:10:32.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.501 "hdgst": false, 00:10:32.501 "ddgst": false 00:10:32.501 }, 00:10:32.501 "method": "bdev_nvme_attach_controller" 00:10:32.501 }' 00:10:32.758 [2024-11-19 10:38:20.123037] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:32.759 [2024-11-19 10:38:20.123111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263415 ] 00:10:32.759 [2024-11-19 10:38:20.192222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.759 [2024-11-19 10:38:20.253627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.017 Running I/O for 10 seconds... 
00:10:35.324 5801.00 IOPS, 45.32 MiB/s [2024-11-19T09:38:23.880Z] 5831.00 IOPS, 45.55 MiB/s [2024-11-19T09:38:24.812Z] 5842.67 IOPS, 45.65 MiB/s [2024-11-19T09:38:25.744Z] 5848.75 IOPS, 45.69 MiB/s [2024-11-19T09:38:26.675Z] 5864.20 IOPS, 45.81 MiB/s [2024-11-19T09:38:28.047Z] 5863.67 IOPS, 45.81 MiB/s [2024-11-19T09:38:28.980Z] 5864.29 IOPS, 45.81 MiB/s [2024-11-19T09:38:29.912Z] 5864.50 IOPS, 45.82 MiB/s [2024-11-19T09:38:30.843Z] 5864.33 IOPS, 45.82 MiB/s [2024-11-19T09:38:30.843Z] 5864.40 IOPS, 45.82 MiB/s 00:10:43.220 Latency(us) 00:10:43.220 [2024-11-19T09:38:30.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.220 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:43.220 Verification LBA range: start 0x0 length 0x1000 00:10:43.220 Nvme1n1 : 10.01 5870.34 45.86 0.00 0.00 21746.12 321.61 32039.82 00:10:43.220 [2024-11-19T09:38:30.843Z] =================================================================================================================== 00:10:43.220 [2024-11-19T09:38:30.843Z] Total : 5870.34 45.86 0.00 0.00 21746.12 321.61 32039.82 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1264692 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:43.479 10:38:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:43.479 { 00:10:43.479 "params": { 00:10:43.479 "name": "Nvme$subsystem", 00:10:43.479 "trtype": "$TEST_TRANSPORT", 00:10:43.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.479 "adrfam": "ipv4", 00:10:43.479 "trsvcid": "$NVMF_PORT", 00:10:43.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.479 "hdgst": ${hdgst:-false}, 00:10:43.479 "ddgst": ${ddgst:-false} 00:10:43.479 }, 00:10:43.479 "method": "bdev_nvme_attach_controller" 00:10:43.479 } 00:10:43.479 EOF 00:10:43.479 )") 00:10:43.479 [2024-11-19 10:38:30.868759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.868800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:43.479 10:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:43.479 "params": { 00:10:43.479 "name": "Nvme1", 00:10:43.479 "trtype": "tcp", 00:10:43.479 "traddr": "10.0.0.2", 00:10:43.479 "adrfam": "ipv4", 00:10:43.479 "trsvcid": "4420", 00:10:43.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.479 "hdgst": false, 00:10:43.479 "ddgst": false 00:10:43.479 }, 00:10:43.479 "method": "bdev_nvme_attach_controller" 00:10:43.479 }' 00:10:43.479 [2024-11-19 10:38:30.876717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.876738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.884729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.884749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.892748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.892767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.900768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.900787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.908789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.908807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.914434] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:10:43.479 [2024-11-19 10:38:30.914509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264692 ] 00:10:43.479 [2024-11-19 10:38:30.916809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.916829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.924833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.924852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.932856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.932876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.940875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.940894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.948899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.948926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.956920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.956939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.964941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.964961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:43.479 [2024-11-19 10:38:30.972962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.972982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.980985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.981005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.982673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.479 [2024-11-19 10:38:30.989029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.989056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:30.997063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:30.997096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.005052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.005073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.013071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.013092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.021115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.021135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.029115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.029135] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.037134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.037154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.041958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.479 [2024-11-19 10:38:31.045156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.045176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.053182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.053204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.061244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.061278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.069258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.069314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.077313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.077365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.085331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.479 [2024-11-19 10:38:31.085380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.479 [2024-11-19 10:38:31.093350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:43.479 [2024-11-19 10:38:31.093403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.101383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.101417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.109400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.109430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.117386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.117409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.125431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.125465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.133460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.133499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.141454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.141480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.149456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.149478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.157484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 
10:38:31.157508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.165512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.165539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.173529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.173553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.181548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.181572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.189592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.189617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.197608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.197631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.205626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.205662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.213660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.213680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.221667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.221688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.229695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.229717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.738 [2024-11-19 10:38:31.237702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.738 [2024-11-19 10:38:31.237724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.245724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.245753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.253753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.253775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.261773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.261793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.269798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.269818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.277821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.277842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.285848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.285871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 
[2024-11-19 10:38:31.293864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.293884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.301885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.301906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.309908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.309927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.317934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.317954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.325956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.325977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.333978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.333999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.342000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.342020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.350022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.350042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.739 [2024-11-19 10:38:31.358084] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.739 [2024-11-19 10:38:31.358104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical error pair repeated, timestamps 10:38:31.366 through 10:38:31.390, elided]
Running I/O for 5 seconds...
[identical error pair repeated, timestamps 10:38:31.398 through 10:38:32.392, elided]
11697.00 IOPS, 91.38 MiB/s [2024-11-19T09:38:32.652Z]
[identical error pair repeated, timestamps 10:38:32.402 through 10:38:33.188, elided; log truncated mid-entry]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.199180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.199208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.209726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.209753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.222331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.222360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.232642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.232669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.243490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.243533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.255833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.255883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.265918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.265946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.276383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.276411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:45.802 [2024-11-19 10:38:33.286930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.802 [2024-11-19 10:38:33.286957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.802 [2024-11-19 10:38:33.297824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.297851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.310402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.310429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.320834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.320862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.331651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.331693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.344327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.344370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.354439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.354466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.365002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.365029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.375462] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.375490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.386075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.386104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.396803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.396830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 11838.50 IOPS, 92.49 MiB/s [2024-11-19T09:38:33.426Z] [2024-11-19 10:38:33.407112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.407140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.803 [2024-11-19 10:38:33.417543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.803 [2024-11-19 10:38:33.417570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.428101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.428129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.440750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.440778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.451956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.451984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.460608] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.460643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.472325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.472353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.482849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.482877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.493181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.493208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.503819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.503846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.514221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.514248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.524817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.524844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.536997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.537039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.546784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.546811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.557454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.557482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.567990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.568017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.579017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.579044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.592422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.592450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.602769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.602796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.613143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.613170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.623810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.623838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.634594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 
[2024-11-19 10:38:33.634622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.646892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.646919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.656443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.656470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.669097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.669130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.061 [2024-11-19 10:38:33.679376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.061 [2024-11-19 10:38:33.679404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.689820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.689848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.700235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.700262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.711136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.711164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.723666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.723694] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.733604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.733632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.744683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.744711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.758472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.758500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.770257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.770284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.779361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.779388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.790912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.790940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.803649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.803694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.813949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.813981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:46.319 [2024-11-19 10:38:33.824351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.824379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.835226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.835254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.847282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.847319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.856985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.857012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.868322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.868350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.879125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.879152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.889969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.889997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.902846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.902874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.913287] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.913325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.923956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.923983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.319 [2024-11-19 10:38:33.934490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.319 [2024-11-19 10:38:33.934518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:33.944988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:33.945014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:33.955760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:33.955787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:33.966449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:33.966476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:33.979103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:33.979131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:33.990809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:33.990851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.000139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.000166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.011595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.011622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.022415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.022457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.033423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.033450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.046071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.046099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.055778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.055805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.066382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.066410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.077176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.077204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.089844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 
[2024-11-19 10:38:34.089871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.101490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.101517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.110710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.110737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.121833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.121860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.132120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.132147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.142376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.142403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.153459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.153486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.577 [2024-11-19 10:38:34.166868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.577 [2024-11-19 10:38:34.166896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.578 [2024-11-19 10:38:34.177378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.578 [2024-11-19 10:38:34.177405] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.578 [2024-11-19 10:38:34.188229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.578 [2024-11-19 10:38:34.188257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.835 [2024-11-19 10:38:34.198882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.835 [2024-11-19 10:38:34.198920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.835 [2024-11-19 10:38:34.209345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.835 [2024-11-19 10:38:34.209384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.835 [2024-11-19 10:38:34.219860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.835 [2024-11-19 10:38:34.219887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.835 [2024-11-19 10:38:34.230745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.835 [2024-11-19 10:38:34.230787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.835 [2024-11-19 10:38:34.241400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.835 [2024-11-19 10:38:34.241428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.251730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.251757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.262609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.262635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:46.836 [2024-11-19 10:38:34.276218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.276245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.286568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.286595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.297676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.297704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.310468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.310495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.320553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.320580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.331326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.331354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.341795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.341822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.352337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.352379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [2024-11-19 10:38:34.362719] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.836 [2024-11-19 10:38:34.362746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.836 [... identical error pair repeated at ~10 ms intervals, 10:38:34.373 through 10:38:36.217 (elapsed 00:10:46.836-00:10:48.645); duplicates elided ...] 11879.33 IOPS, 92.81 MiB/s [2024-11-19T09:38:34.459Z] 11907.25 IOPS, 93.03 MiB/s [2024-11-19T09:38:35.493Z] [2024-11-19 10:38:36.227618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.645 [2024-11-19 10:38:36.227646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:48.645 [2024-11-19 10:38:36.238133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.645 [2024-11-19 10:38:36.238161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.645 [2024-11-19 10:38:36.248949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.645 [2024-11-19 10:38:36.248977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.645 [2024-11-19 10:38:36.259536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.645 [2024-11-19 10:38:36.259563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.271980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.272007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.282085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.282112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.292747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.292775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.303569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.303596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.316717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.316758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.326691] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.326718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.337049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.337076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.347819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.347854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.358776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.358804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.369328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.369355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.381725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.381752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.392088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.392115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.402789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.402816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 11902.80 IOPS, 92.99 MiB/s [2024-11-19T09:38:36.526Z] [2024-11-19 10:38:36.412262] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.412290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 00:10:48.903 Latency(us) 00:10:48.903 [2024-11-19T09:38:36.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.903 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:48.903 Nvme1n1 : 5.01 11911.20 93.06 0.00 0.00 10733.48 4514.70 22913.33 00:10:48.903 [2024-11-19T09:38:36.526Z] =================================================================================================================== 00:10:48.903 [2024-11-19T09:38:36.526Z] Total : 11911.20 93.06 0.00 0.00 10733.48 4514.70 22913.33 00:10:48.903 [2024-11-19 10:38:36.418880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.418903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.426897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.426920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.434926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.434949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.443010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.443058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.903 [2024-11-19 10:38:36.451030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.903 [2024-11-19 10:38:36.451080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:48.904 [2024-11-19 10:38:36.459040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.459086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.467064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.467108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.475078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.475123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.483115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.483163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.491134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.491182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.499146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.499191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.507173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.507220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.515196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.515241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.904 [2024-11-19 10:38:36.523226] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.904 [2024-11-19 10:38:36.523273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.531244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.531289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.539270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.539325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.547284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.547334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.555318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.555360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.563316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.563350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.571317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.571339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.579335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.579371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.587364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.587387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.161 [2024-11-19 10:38:36.595370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.161 [2024-11-19 10:38:36.595393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.603451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.603499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.611456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.611501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.619461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.619490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.627461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.627483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.635485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.635514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 [2024-11-19 10:38:36.643505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.162 [2024-11-19 10:38:36.643528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1264692) - No such process 00:10:49.162 
10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1264692 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.162 delay0 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.162 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:49.162 [2024-11-19 10:38:36.770117] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:10:55.749 [2024-11-19 10:38:42.826770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d033a0 is same with the state(6) to be set 00:10:55.749 Initializing NVMe Controllers 00:10:55.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:55.749 Initialization complete. Launching workers. 00:10:55.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 82 00:10:55.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 33 00:10:55.749 success 236, unsuccessful 133, failed 0 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.749 rmmod nvme_tcp 00:10:55.749 rmmod nvme_fabrics 00:10:55.749 rmmod nvme_keyring 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:55.749 10:38:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1263354 ']' 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1263354 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1263354 ']' 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1263354 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263354 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263354' 00:10:55.749 killing process with pid 1263354 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1263354 00:10:55.749 10:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1263354 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.749 10:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.654 00:10:57.654 real 0m28.053s 00:10:57.654 user 0m41.502s 00:10:57.654 sys 0m8.115s 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.654 ************************************ 00:10:57.654 END TEST nvmf_zcopy 00:10:57.654 ************************************ 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.654 ************************************ 00:10:57.654 START TEST nvmf_nmic 00:10:57.654 ************************************ 00:10:57.654 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:57.913 * Looking for test storage... 00:10:57.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:57.913 10:38:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.913 --rc 
genhtml_branch_coverage=1 00:10:57.913 --rc genhtml_function_coverage=1 00:10:57.913 --rc genhtml_legend=1 00:10:57.913 --rc geninfo_all_blocks=1 00:10:57.913 --rc geninfo_unexecuted_blocks=1 00:10:57.913 00:10:57.913 ' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.913 --rc genhtml_branch_coverage=1 00:10:57.913 --rc genhtml_function_coverage=1 00:10:57.913 --rc genhtml_legend=1 00:10:57.913 --rc geninfo_all_blocks=1 00:10:57.913 --rc geninfo_unexecuted_blocks=1 00:10:57.913 00:10:57.913 ' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.913 --rc genhtml_branch_coverage=1 00:10:57.913 --rc genhtml_function_coverage=1 00:10:57.913 --rc genhtml_legend=1 00:10:57.913 --rc geninfo_all_blocks=1 00:10:57.913 --rc geninfo_unexecuted_blocks=1 00:10:57.913 00:10:57.913 ' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.913 --rc genhtml_branch_coverage=1 00:10:57.913 --rc genhtml_function_coverage=1 00:10:57.913 --rc genhtml_legend=1 00:10:57.913 --rc geninfo_all_blocks=1 00:10:57.913 --rc geninfo_unexecuted_blocks=1 00:10:57.913 00:10:57.913 ' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.913 10:38:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.913 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.914 
10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.914 10:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.444 10:38:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:00.444 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:00.444 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:00.444 Found net devices under 0000:09:00.0: cvl_0_0 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:00.444 Found net devices under 0000:09:00.1: cvl_0_1 00:11:00.444 
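The device discovery traced above buckets PCI NICs by vendor:device ID into the `e810`, `x722`, and `mlx` arrays before picking test interfaces. A minimal standalone sketch of that bucketing logic, fed the two `0x8086:0x159b` devices the log found (the function name and the hard-coded device list are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the vendor/device bucketing done during NIC discovery.
# classify() is a hypothetical stand-in; the real script walks a pci_bus_cache
# populated from sysfs. Device IDs below are the ones named in the trace.
intel=0x8086
mellanox=0x15b3
e810=() ; x722=() ; mlx=()

classify() {
  local vendor=$1 device=$2 addr=$3
  case "$vendor:$device" in
    "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;   # Intel E810 family
    "$intel:0x37d2")                   x722+=("$addr") ;;   # Intel X722
    "$mellanox:"*)                     mlx+=("$addr") ;;    # any Mellanox part
  esac
}

# The two ports the log discovered under 0000:09:00.*
classify 0x8086 0x159b 0000:09:00.0
classify 0x8086 0x159b 0000:09:00.1

echo "e810: ${#e810[@]} device(s), x722: ${#x722[@]}, mlx: ${#mlx[@]}"
```

With both ports classified as E810, the `(( 2 == 0 ))` emptiness checks in the trace pass and the test proceeds with hardware NICs (`is_hw=yes`).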
10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.444 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:11:00.445 00:11:00.445 --- 10.0.0.2 ping statistics --- 00:11:00.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.445 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:00.445 00:11:00.445 --- 10.0.0.1 ping statistics --- 00:11:00.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.445 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1268098 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
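The `nvmf_tcp_init` sequence traced above moves one port into a network namespace, addresses both ends, opens the NVMe/TCP port in iptables, and ping-checks both directions. A dry-run sketch of that plumbing, with interface and namespace names taken from the trace; `RUN` defaults to `echo` so the commands are printed rather than executed (running them for real requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side network prep from the trace.
# Set RUN= (empty) to actually execute; the default just prints each command.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk     # target namespace name from the trace
TGT_IF=cvl_0_0         # interface handed to the target namespace
INI_IF=cvl_0_1         # initiator-side interface, stays in the root namespace

$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

After this, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec $NS ping -c 1 10.0.0.1` from the target side verify the path, exactly as the trace shows, and `nvmf_tgt` is then launched inside the namespace via `ip netns exec`.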
00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1268098 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1268098 ']' 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.445 10:38:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.445 [2024-11-19 10:38:47.906129] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:11:00.445 [2024-11-19 10:38:47.906227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.445 [2024-11-19 10:38:47.976150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.445 [2024-11-19 10:38:48.032924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.445 [2024-11-19 10:38:48.032975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.445 [2024-11-19 10:38:48.033003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.445 [2024-11-19 10:38:48.033014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.445 [2024-11-19 10:38:48.033023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.445 [2024-11-19 10:38:48.034467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.445 [2024-11-19 10:38:48.034531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.445 [2024-11-19 10:38:48.034596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.445 [2024-11-19 10:38:48.034599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 [2024-11-19 10:38:48.187081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.703 
10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 Malloc0 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 [2024-11-19 10:38:48.262936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:00.703 test case1: single bdev can't be used in multiple subsystems 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.703 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.704 [2024-11-19 10:38:48.286789] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:00.704 [2024-11-19 
10:38:48.286819] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:00.704 [2024-11-19 10:38:48.286833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.704 request: 00:11:00.704 { 00:11:00.704 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:00.704 "namespace": { 00:11:00.704 "bdev_name": "Malloc0", 00:11:00.704 "no_auto_visible": false 00:11:00.704 }, 00:11:00.704 "method": "nvmf_subsystem_add_ns", 00:11:00.704 "req_id": 1 00:11:00.704 } 00:11:00.704 Got JSON-RPC error response 00:11:00.704 response: 00:11:00.704 { 00:11:00.704 "code": -32602, 00:11:00.704 "message": "Invalid parameters" 00:11:00.704 } 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:00.704 Adding namespace failed - expected result. 
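Test case 1 above is a negative test: the RPC is expected to fail, so the script captures its exit status and treats success as the failure case. A runnable sketch of that pattern; `add_ns_to_second_subsystem` is a hypothetical stand-in for the real `rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0` call, mocked here to fail the way the target does when the bdev is already claimed:

```shell
#!/usr/bin/env bash
# Sketch of the expected-failure pattern from the nmic test:
# run the command, record its exit status, and fail the test if it succeeded.
add_ns_to_second_subsystem() {
  # Mocked failure; the real RPC returns -32602 "Invalid parameters"
  # because Malloc0 is already claimed exclusive_write by cnode1.
  echo "bdev Malloc0 already claimed: type exclusive_write" >&2
  return 1
}

nmic_status=0
add_ns_to_second_subsystem || nmic_status=$?

if [ "$nmic_status" -eq 0 ]; then
  echo "FAIL: adding the namespace to a second subsystem should be rejected"
  exit 1
fi
echo " Adding namespace failed - expected result."
```

The `|| nmic_status=$?` idiom matters under `set -e`-style harnesses: it records the failure without aborting the script, which is exactly what the trace's `nmic_status=1` / `'[' 1 -eq 0 ']'` lines show.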
00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:00.704 test case2: host connect to nvmf target in multiple paths 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.704 [2024-11-19 10:38:48.294903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.704 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.637 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:02.202 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.202 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.202 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.202 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.202 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:04.099 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.099 [global] 00:11:04.099 thread=1 00:11:04.099 invalidate=1 00:11:04.099 rw=write 00:11:04.099 time_based=1 00:11:04.099 runtime=1 00:11:04.099 ioengine=libaio 00:11:04.099 direct=1 00:11:04.099 bs=4096 00:11:04.099 iodepth=1 00:11:04.099 norandommap=0 00:11:04.099 numjobs=1 00:11:04.099 00:11:04.099 verify_dump=1 00:11:04.099 verify_backlog=512 00:11:04.099 verify_state_save=0 00:11:04.099 do_verify=1 00:11:04.099 verify=crc32c-intel 00:11:04.099 [job0] 00:11:04.099 filename=/dev/nvme0n1 00:11:04.099 Could not set queue depth (nvme0n1) 00:11:04.356 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.356 fio-3.35 00:11:04.356 Starting 1 thread 00:11:05.725 00:11:05.725 job0: (groupid=0, jobs=1): err= 0: pid=1268615: Tue Nov 19 10:38:52 2024 00:11:05.725 read: IOPS=1385, BW=5540KiB/s (5673kB/s)(5712KiB/1031msec) 00:11:05.725 slat (nsec): min=4244, max=60424, avg=11012.96, stdev=5668.97 00:11:05.725 clat (usec): min=180, max=42001, avg=523.05, stdev=3421.92 00:11:05.725 lat (usec): min=185, max=42019, 
avg=534.06, stdev=3423.03 00:11:05.725 clat percentiles (usec): 00:11:05.725 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:11:05.725 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 243], 00:11:05.725 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:11:05.725 | 99.00th=[ 306], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:05.725 | 99.99th=[42206] 00:11:05.725 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:11:05.725 slat (nsec): min=5358, max=54290, avg=13705.15, stdev=6281.24 00:11:05.725 clat (usec): min=121, max=276, avg=153.78, stdev=15.85 00:11:05.725 lat (usec): min=127, max=303, avg=167.49, stdev=19.69 00:11:05.725 clat percentiles (usec): 00:11:05.725 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:11:05.725 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:11:05.725 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 182], 00:11:05.725 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 215], 99.95th=[ 277], 00:11:05.725 | 99.99th=[ 277] 00:11:05.725 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:11:05.725 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:05.725 lat (usec) : 250=87.38%, 500=12.25% 00:11:05.725 lat (msec) : 4=0.03%, 50=0.34% 00:11:05.725 cpu : usr=2.62%, sys=4.56%, ctx=2964, majf=0, minf=1 00:11:05.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.725 issued rwts: total=1428,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.725 00:11:05.725 Run status group 0 (all jobs): 00:11:05.725 READ: bw=5540KiB/s (5673kB/s), 5540KiB/s-5540KiB/s (5673kB/s-5673kB/s), io=5712KiB (5849kB), 
run=1031-1031msec 00:11:05.725 WRITE: bw=5959KiB/s (6102kB/s), 5959KiB/s-5959KiB/s (6102kB/s-6102kB/s), io=6144KiB (6291kB), run=1031-1031msec 00:11:05.725 00:11:05.725 Disk stats (read/write): 00:11:05.725 nvme0n1: ios=1474/1536, merge=0/0, ticks=799/218, in_queue=1017, util=95.69% 00:11:05.725 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 
-- # for i in {1..20} 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.725 rmmod nvme_tcp 00:11:05.725 rmmod nvme_fabrics 00:11:05.725 rmmod nvme_keyring 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1268098 ']' 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1268098 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1268098 ']' 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1268098 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1268098 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1268098' 00:11:05.725 killing process with pid 1268098 00:11:05.725 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1268098 00:11:05.726 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1268098 00:11:06.003 10:38:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.003 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.972 00:11:07.972 real 0m10.249s 00:11:07.972 user 0m22.758s 00:11:07.972 sys 0m2.567s 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.972 ************************************ 00:11:07.972 END TEST nvmf_nmic 00:11:07.972 ************************************ 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.972 ************************************ 00:11:07.972 START TEST nvmf_fio_target 00:11:07.972 ************************************ 00:11:07.972 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:08.232 * Looking for test storage... 00:11:08.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:08.232 10:38:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.232 --rc genhtml_branch_coverage=1 00:11:08.232 --rc genhtml_function_coverage=1 00:11:08.232 --rc genhtml_legend=1 00:11:08.232 --rc geninfo_all_blocks=1 00:11:08.232 --rc geninfo_unexecuted_blocks=1 00:11:08.232 00:11:08.232 ' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.232 --rc genhtml_branch_coverage=1 00:11:08.232 --rc genhtml_function_coverage=1 00:11:08.232 --rc genhtml_legend=1 00:11:08.232 --rc geninfo_all_blocks=1 00:11:08.232 --rc geninfo_unexecuted_blocks=1 00:11:08.232 00:11:08.232 ' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.232 --rc genhtml_branch_coverage=1 00:11:08.232 --rc genhtml_function_coverage=1 00:11:08.232 --rc genhtml_legend=1 00:11:08.232 --rc geninfo_all_blocks=1 00:11:08.232 --rc geninfo_unexecuted_blocks=1 00:11:08.232 00:11:08.232 ' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:08.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.232 --rc genhtml_branch_coverage=1 00:11:08.232 --rc genhtml_function_coverage=1 00:11:08.232 --rc genhtml_legend=1 00:11:08.232 --rc geninfo_all_blocks=1 00:11:08.232 --rc geninfo_unexecuted_blocks=1 00:11:08.232 00:11:08.232 ' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.232 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.233 10:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.766 10:38:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:10.766 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:10.767 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:10.767 10:38:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:10.767 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:10.767 Found net devices under 0000:09:00.0: cvl_0_0 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:10.767 Found net devices under 0000:09:00.1: cvl_0_1 
00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.767 10:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.767 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.767 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.767 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.767 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:10.768 00:11:10.768 --- 10.0.0.2 ping statistics --- 00:11:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.768 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:10.768 00:11:10.768 --- 10.0.0.1 ping statistics --- 00:11:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.768 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1270821 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1270821 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1270821 ']' 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.768 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.768 [2024-11-19 10:38:58.133766] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:11:10.768 [2024-11-19 10:38:58.133856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.768 [2024-11-19 10:38:58.209187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.768 [2024-11-19 10:38:58.270424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.768 [2024-11-19 10:38:58.270475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.768 [2024-11-19 10:38:58.270505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.768 [2024-11-19 10:38:58.270517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.768 [2024-11-19 10:38:58.270526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:10.768 [2024-11-19 10:38:58.272084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.768 [2024-11-19 10:38:58.272109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.768 [2024-11-19 10:38:58.272169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.768 [2024-11-19 10:38:58.272172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.027 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.285 [2024-11-19 10:38:58.709200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.285 10:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.543 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:11.543 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.800 10:38:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:11.800 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.365 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:12.365 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.622 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:12.622 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:12.879 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.137 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.137 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.396 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.396 10:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.654 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:13.654 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:14.223 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.223 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.223 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.789 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.789 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.789 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.047 [2024-11-19 10:39:02.663057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.306 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:15.565 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.823 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:16.390 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:18.917 10:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.917 [global] 00:11:18.917 thread=1 00:11:18.917 invalidate=1 00:11:18.917 rw=write 00:11:18.917 time_based=1 00:11:18.917 runtime=1 00:11:18.917 ioengine=libaio 00:11:18.917 direct=1 00:11:18.917 bs=4096 00:11:18.917 iodepth=1 00:11:18.917 norandommap=0 00:11:18.917 numjobs=1 00:11:18.917 00:11:18.917 
verify_dump=1 00:11:18.917 verify_backlog=512 00:11:18.917 verify_state_save=0 00:11:18.917 do_verify=1 00:11:18.917 verify=crc32c-intel 00:11:18.917 [job0] 00:11:18.917 filename=/dev/nvme0n1 00:11:18.917 [job1] 00:11:18.917 filename=/dev/nvme0n2 00:11:18.917 [job2] 00:11:18.917 filename=/dev/nvme0n3 00:11:18.917 [job3] 00:11:18.917 filename=/dev/nvme0n4 00:11:18.917 Could not set queue depth (nvme0n1) 00:11:18.917 Could not set queue depth (nvme0n2) 00:11:18.917 Could not set queue depth (nvme0n3) 00:11:18.917 Could not set queue depth (nvme0n4) 00:11:18.917 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.917 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.917 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.917 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.917 fio-3.35 00:11:18.917 Starting 4 threads 00:11:19.849 00:11:19.849 job0: (groupid=0, jobs=1): err= 0: pid=1272023: Tue Nov 19 10:39:07 2024 00:11:19.849 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:11:19.849 slat (nsec): min=6339, max=35380, avg=18926.27, stdev=9234.37 00:11:19.849 clat (usec): min=40925, max=41023, avg=40972.01, stdev=22.53 00:11:19.849 lat (usec): min=40939, max=41037, avg=40990.94, stdev=20.64 00:11:19.849 clat percentiles (usec): 00:11:19.849 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:19.849 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:19.849 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:19.849 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:19.849 | 99.99th=[41157] 00:11:19.849 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:19.849 slat (usec): min=6, max=18125, 
avg=59.04, stdev=828.02 00:11:19.849 clat (usec): min=128, max=3710, avg=170.14, stdev=157.19 00:11:19.849 lat (usec): min=138, max=18324, avg=229.19, stdev=844.23 00:11:19.849 clat percentiles (usec): 00:11:19.849 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:11:19.849 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:11:19.849 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:11:19.849 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 3720], 99.95th=[ 3720], 00:11:19.849 | 99.99th=[ 3720] 00:11:19.849 bw ( KiB/s): min= 4087, max= 4087, per=20.38%, avg=4087.00, stdev= 0.00, samples=1 00:11:19.849 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:19.849 lat (usec) : 250=95.69% 00:11:19.849 lat (msec) : 4=0.19%, 50=4.12% 00:11:19.849 cpu : usr=0.49%, sys=0.49%, ctx=539, majf=0, minf=1 00:11:19.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.849 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.849 job1: (groupid=0, jobs=1): err= 0: pid=1272024: Tue Nov 19 10:39:07 2024 00:11:19.849 read: IOPS=1024, BW=4099KiB/s (4198kB/s)(4132KiB/1008msec) 00:11:19.849 slat (nsec): min=5420, max=35182, avg=11882.03, stdev=5691.89 00:11:19.849 clat (usec): min=217, max=41015, avg=655.21, stdev=3979.04 00:11:19.849 lat (usec): min=224, max=41030, avg=667.10, stdev=3980.14 00:11:19.849 clat percentiles (usec): 00:11:19.849 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:11:19.849 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:11:19.849 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:11:19.849 | 99.00th=[ 437], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:11:19.849 | 99.99th=[41157] 00:11:19.849 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:11:19.849 slat (nsec): min=6909, max=77170, avg=14770.08, stdev=6876.74 00:11:19.849 clat (usec): min=131, max=242, avg=185.66, stdev=17.90 00:11:19.849 lat (usec): min=139, max=268, avg=200.43, stdev=21.44 00:11:19.849 clat percentiles (usec): 00:11:19.849 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:11:19.849 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:11:19.849 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:11:19.849 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 243], 99.95th=[ 243], 00:11:19.849 | 99.99th=[ 243] 00:11:19.849 bw ( KiB/s): min= 4096, max= 8175, per=30.59%, avg=6135.50, stdev=2884.29, samples=2 00:11:19.849 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:11:19.849 lat (usec) : 250=70.18%, 500=29.43% 00:11:19.849 lat (msec) : 50=0.39% 00:11:19.849 cpu : usr=2.58%, sys=4.77%, ctx=2570, majf=0, minf=1 00:11:19.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.850 job2: (groupid=0, jobs=1): err= 0: pid=1272025: Tue Nov 19 10:39:07 2024 00:11:19.850 read: IOPS=852, BW=3409KiB/s (3491kB/s)(3436KiB/1008msec) 00:11:19.850 slat (nsec): min=4492, max=64886, avg=17982.37, stdev=10767.68 00:11:19.850 clat (usec): min=206, max=41901, avg=826.10, stdev=4374.02 00:11:19.850 lat (usec): min=217, max=41952, avg=844.08, stdev=4374.88 00:11:19.850 clat percentiles (usec): 00:11:19.850 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 265], 20.00th=[ 310], 00:11:19.850 | 30.00th=[ 322], 40.00th=[ 338], 
50.00th=[ 351], 60.00th=[ 367], 00:11:19.850 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 429], 95.00th=[ 469], 00:11:19.850 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:19.850 | 99.99th=[41681] 00:11:19.850 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:11:19.850 slat (nsec): min=5938, max=59883, avg=14865.76, stdev=8715.75 00:11:19.850 clat (usec): min=152, max=455, avg=252.56, stdev=55.57 00:11:19.850 lat (usec): min=162, max=482, avg=267.43, stdev=58.81 00:11:19.850 clat percentiles (usec): 00:11:19.850 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 212], 00:11:19.850 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:11:19.850 | 70.00th=[ 258], 80.00th=[ 302], 90.00th=[ 338], 95.00th=[ 367], 00:11:19.850 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 441], 99.95th=[ 457], 00:11:19.850 | 99.99th=[ 457] 00:11:19.850 bw ( KiB/s): min= 8175, max= 8175, per=40.76%, avg=8175.00, stdev= 0.00, samples=1 00:11:19.850 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:11:19.850 lat (usec) : 250=38.45%, 500=60.12%, 750=0.90% 00:11:19.850 lat (msec) : 50=0.53% 00:11:19.850 cpu : usr=2.38%, sys=2.38%, ctx=1883, majf=0, minf=2 00:11:19.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 issued rwts: total=859,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.850 job3: (groupid=0, jobs=1): err= 0: pid=1272026: Tue Nov 19 10:39:07 2024 00:11:19.850 read: IOPS=1800, BW=7201KiB/s (7374kB/s)(7208KiB/1001msec) 00:11:19.850 slat (nsec): min=4370, max=70541, avg=14452.56, stdev=8966.13 00:11:19.850 clat (usec): min=185, max=1205, avg=294.84, stdev=96.35 00:11:19.850 lat (usec): min=191, max=1216, 
avg=309.29, stdev=100.05 00:11:19.850 clat percentiles (usec): 00:11:19.850 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:11:19.850 | 30.00th=[ 223], 40.00th=[ 237], 50.00th=[ 253], 60.00th=[ 306], 00:11:19.850 | 70.00th=[ 343], 80.00th=[ 375], 90.00th=[ 429], 95.00th=[ 482], 00:11:19.850 | 99.00th=[ 570], 99.50th=[ 619], 99.90th=[ 693], 99.95th=[ 1205], 00:11:19.850 | 99.99th=[ 1205] 00:11:19.850 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:19.850 slat (nsec): min=5680, max=50665, avg=13076.80, stdev=6114.13 00:11:19.850 clat (usec): min=134, max=461, avg=195.87, stdev=54.21 00:11:19.850 lat (usec): min=140, max=479, avg=208.95, stdev=55.51 00:11:19.850 clat percentiles (usec): 00:11:19.850 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:11:19.850 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 174], 60.00th=[ 206], 00:11:19.850 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 302], 00:11:19.850 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 453], 00:11:19.850 | 99.99th=[ 461] 00:11:19.850 bw ( KiB/s): min= 8175, max= 8175, per=40.76%, avg=8175.00, stdev= 0.00, samples=1 00:11:19.850 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:11:19.850 lat (usec) : 250=68.57%, 500=29.56%, 750=1.84% 00:11:19.850 lat (msec) : 2=0.03% 00:11:19.850 cpu : usr=2.20%, sys=6.10%, ctx=3850, majf=0, minf=1 00:11:19.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.850 issued rwts: total=1802,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.850 00:11:19.850 Run status group 0 (all jobs): 00:11:19.850 READ: bw=14.2MiB/s (14.9MB/s), 86.2KiB/s-7201KiB/s (88.3kB/s-7374kB/s), io=14.5MiB (15.2MB), 
run=1001-1021msec 00:11:19.850 WRITE: bw=19.6MiB/s (20.5MB/s), 2006KiB/s-8184KiB/s (2054kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1021msec 00:11:19.850 00:11:19.850 Disk stats (read/write): 00:11:19.850 nvme0n1: ios=61/512, merge=0/0, ticks=1250/83, in_queue=1333, util=98.80% 00:11:19.850 nvme0n2: ios=1042/1536, merge=0/0, ticks=506/264, in_queue=770, util=86.78% 00:11:19.850 nvme0n3: ios=861/1024, merge=0/0, ticks=789/251, in_queue=1040, util=90.80% 00:11:19.850 nvme0n4: ios=1536/1542, merge=0/0, ticks=455/310, in_queue=765, util=89.68% 00:11:19.850 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:19.850 [global] 00:11:19.850 thread=1 00:11:19.850 invalidate=1 00:11:19.850 rw=randwrite 00:11:19.850 time_based=1 00:11:19.850 runtime=1 00:11:19.850 ioengine=libaio 00:11:19.850 direct=1 00:11:19.850 bs=4096 00:11:19.850 iodepth=1 00:11:19.850 norandommap=0 00:11:19.850 numjobs=1 00:11:19.850 00:11:19.850 verify_dump=1 00:11:19.850 verify_backlog=512 00:11:19.850 verify_state_save=0 00:11:19.850 do_verify=1 00:11:19.850 verify=crc32c-intel 00:11:19.850 [job0] 00:11:19.850 filename=/dev/nvme0n1 00:11:19.850 [job1] 00:11:19.850 filename=/dev/nvme0n2 00:11:19.850 [job2] 00:11:19.850 filename=/dev/nvme0n3 00:11:19.850 [job3] 00:11:19.850 filename=/dev/nvme0n4 00:11:19.850 Could not set queue depth (nvme0n1) 00:11:19.850 Could not set queue depth (nvme0n2) 00:11:19.850 Could not set queue depth (nvme0n3) 00:11:19.850 Could not set queue depth (nvme0n4) 00:11:20.108 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.108 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.108 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.108 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.108 fio-3.35 00:11:20.108 Starting 4 threads 00:11:21.481 00:11:21.481 job0: (groupid=0, jobs=1): err= 0: pid=1272406: Tue Nov 19 10:39:08 2024 00:11:21.481 read: IOPS=280, BW=1122KiB/s (1149kB/s)(1124KiB/1002msec) 00:11:21.481 slat (nsec): min=4763, max=57139, avg=13148.16, stdev=7439.38 00:11:21.481 clat (usec): min=175, max=42008, avg=3094.34, stdev=10376.51 00:11:21.481 lat (usec): min=189, max=42023, avg=3107.49, stdev=10378.38 00:11:21.481 clat percentiles (usec): 00:11:21.481 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 206], 00:11:21.481 | 30.00th=[ 253], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 314], 00:11:21.481 | 70.00th=[ 338], 80.00th=[ 453], 90.00th=[ 529], 95.00th=[41157], 00:11:21.481 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:21.481 | 99.99th=[42206] 00:11:21.481 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:21.481 slat (nsec): min=5658, max=41779, avg=8981.32, stdev=3786.90 00:11:21.481 clat (usec): min=152, max=713, avg=235.79, stdev=43.07 00:11:21.481 lat (usec): min=163, max=727, avg=244.77, stdev=42.80 00:11:21.481 clat percentiles (usec): 00:11:21.481 | 1.00th=[ 163], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 210], 00:11:21.481 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:11:21.481 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 293], 00:11:21.481 | 99.00th=[ 343], 99.50th=[ 396], 99.90th=[ 717], 99.95th=[ 717], 00:11:21.481 | 99.99th=[ 717] 00:11:21.481 bw ( KiB/s): min= 4096, max= 4096, per=48.46%, avg=4096.00, stdev= 0.00, samples=1 00:11:21.481 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:21.481 lat (usec) : 250=59.14%, 500=36.07%, 750=2.40% 00:11:21.481 lat (msec) : 50=2.40% 00:11:21.481 cpu : usr=0.30%, sys=1.00%, ctx=794, majf=0, minf=1 00:11:21.481 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.481 issued rwts: total=281,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.481 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.481 job1: (groupid=0, jobs=1): err= 0: pid=1272412: Tue Nov 19 10:39:08 2024 00:11:21.481 read: IOPS=486, BW=1944KiB/s (1991kB/s)(1952KiB/1004msec) 00:11:21.481 slat (nsec): min=6028, max=48209, avg=10623.69, stdev=5012.88 00:11:21.481 clat (usec): min=199, max=40986, avg=1815.29, stdev=7601.35 00:11:21.481 lat (usec): min=208, max=41001, avg=1825.91, stdev=7602.07 00:11:21.481 clat percentiles (usec): 00:11:21.481 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 227], 00:11:21.481 | 30.00th=[ 237], 40.00th=[ 265], 50.00th=[ 400], 60.00th=[ 408], 00:11:21.481 | 70.00th=[ 416], 80.00th=[ 424], 90.00th=[ 437], 95.00th=[ 453], 00:11:21.481 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:21.481 | 99.99th=[41157] 00:11:21.481 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:21.481 slat (nsec): min=6136, max=38996, avg=10135.51, stdev=4321.04 00:11:21.481 clat (usec): min=138, max=819, avg=203.44, stdev=43.68 00:11:21.481 lat (usec): min=145, max=828, avg=213.58, stdev=43.55 00:11:21.481 clat percentiles (usec): 00:11:21.481 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:11:21.481 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 208], 00:11:21.481 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 249], 00:11:21.481 | 99.00th=[ 318], 99.50th=[ 412], 99.90th=[ 824], 99.95th=[ 824], 00:11:21.481 | 99.99th=[ 824] 00:11:21.481 bw ( KiB/s): min= 4096, max= 4096, per=48.46%, avg=4096.00, stdev= 0.00, samples=1 00:11:21.481 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:21.481 lat (usec) : 
250=67.00%, 500=31.00%, 750=0.10%, 1000=0.10% 00:11:21.482 lat (msec) : 50=1.80% 00:11:21.482 cpu : usr=0.40%, sys=1.69%, ctx=1001, majf=0, minf=1 00:11:21.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 issued rwts: total=488,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.482 job2: (groupid=0, jobs=1): err= 0: pid=1272447: Tue Nov 19 10:39:08 2024 00:11:21.482 read: IOPS=229, BW=917KiB/s (939kB/s)(924KiB/1008msec) 00:11:21.482 slat (nsec): min=5481, max=26804, avg=7871.39, stdev=3620.57 00:11:21.482 clat (usec): min=202, max=41211, avg=3788.92, stdev=11452.23 00:11:21.482 lat (usec): min=210, max=41220, avg=3796.79, stdev=11454.16 00:11:21.482 clat percentiles (usec): 00:11:21.482 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:11:21.482 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:21.482 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 347], 95.00th=[41157], 00:11:21.482 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:21.482 | 99.99th=[41157] 00:11:21.482 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:21.482 slat (nsec): min=7354, max=53106, avg=10312.14, stdev=3909.70 00:11:21.482 clat (usec): min=160, max=1285, avg=239.53, stdev=120.57 00:11:21.482 lat (usec): min=168, max=1299, avg=249.85, stdev=121.07 00:11:21.482 clat percentiles (usec): 00:11:21.482 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:11:21.482 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:11:21.482 | 70.00th=[ 227], 80.00th=[ 245], 90.00th=[ 285], 95.00th=[ 371], 00:11:21.482 | 99.00th=[ 807], 99.50th=[ 1037], 99.90th=[ 1287], 99.95th=[ 1287], 00:11:21.482 | 99.99th=[ 
1287] 00:11:21.482 bw ( KiB/s): min= 4096, max= 4096, per=48.46%, avg=4096.00, stdev= 0.00, samples=1 00:11:21.482 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:21.482 lat (usec) : 250=67.03%, 500=27.19%, 750=1.75%, 1000=0.81% 00:11:21.482 lat (msec) : 2=0.40%, 4=0.13%, 50=2.69% 00:11:21.482 cpu : usr=0.50%, sys=0.50%, ctx=744, majf=0, minf=1 00:11:21.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 issued rwts: total=231,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.482 job3: (groupid=0, jobs=1): err= 0: pid=1272457: Tue Nov 19 10:39:08 2024 00:11:21.482 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:21.482 slat (nsec): min=4716, max=46984, avg=12559.53, stdev=7805.00 00:11:21.482 clat (usec): min=189, max=41037, avg=1655.16, stdev=7295.49 00:11:21.482 lat (usec): min=195, max=41052, avg=1667.72, stdev=7296.98 00:11:21.482 clat percentiles (usec): 00:11:21.482 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:11:21.482 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 285], 00:11:21.482 | 70.00th=[ 347], 80.00th=[ 461], 90.00th=[ 502], 95.00th=[ 537], 00:11:21.482 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:21.482 | 99.99th=[41157] 00:11:21.482 write: IOPS=593, BW=2374KiB/s (2431kB/s)(2376KiB/1001msec); 0 zone resets 00:11:21.482 slat (nsec): min=6464, max=45307, avg=11262.96, stdev=4633.57 00:11:21.482 clat (usec): min=137, max=444, avg=227.88, stdev=48.87 00:11:21.482 lat (usec): min=150, max=463, avg=239.14, stdev=48.45 00:11:21.482 clat percentiles (usec): 00:11:21.482 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 188], 00:11:21.482 | 30.00th=[ 212], 40.00th=[ 227], 
50.00th=[ 233], 60.00th=[ 239], 00:11:21.482 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 310], 00:11:21.482 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 445], 00:11:21.482 | 99.99th=[ 445] 00:11:21.482 bw ( KiB/s): min= 4096, max= 4096, per=48.46%, avg=4096.00, stdev= 0.00, samples=1 00:11:21.482 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:21.482 lat (usec) : 250=62.75%, 500=32.46%, 750=3.25% 00:11:21.482 lat (msec) : 50=1.54% 00:11:21.482 cpu : usr=0.60%, sys=1.40%, ctx=1109, majf=0, minf=1 00:11:21.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.482 issued rwts: total=512,594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.482 00:11:21.482 Run status group 0 (all jobs): 00:11:21.482 READ: bw=6000KiB/s (6144kB/s), 917KiB/s-2046KiB/s (939kB/s-2095kB/s), io=6048KiB (6193kB), run=1001-1008msec 00:11:21.482 WRITE: bw=8452KiB/s (8655kB/s), 2032KiB/s-2374KiB/s (2081kB/s-2431kB/s), io=8520KiB (8724kB), run=1001-1008msec 00:11:21.482 00:11:21.482 Disk stats (read/write): 00:11:21.482 nvme0n1: ios=300/512, merge=0/0, ticks=748/123, in_queue=871, util=85.77% 00:11:21.482 nvme0n2: ios=529/512, merge=0/0, ticks=868/101, in_queue=969, util=98.88% 00:11:21.482 nvme0n3: ios=281/512, merge=0/0, ticks=1411/121, in_queue=1532, util=97.60% 00:11:21.482 nvme0n4: ios=526/512, merge=0/0, ticks=841/117, in_queue=958, util=97.68% 00:11:21.482 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:21.482 [global] 00:11:21.482 thread=1 00:11:21.482 invalidate=1 00:11:21.482 rw=write 00:11:21.482 time_based=1 00:11:21.482 
runtime=1 00:11:21.482 ioengine=libaio 00:11:21.482 direct=1 00:11:21.482 bs=4096 00:11:21.482 iodepth=128 00:11:21.482 norandommap=0 00:11:21.482 numjobs=1 00:11:21.482 00:11:21.482 verify_dump=1 00:11:21.482 verify_backlog=512 00:11:21.482 verify_state_save=0 00:11:21.482 do_verify=1 00:11:21.482 verify=crc32c-intel 00:11:21.482 [job0] 00:11:21.482 filename=/dev/nvme0n1 00:11:21.482 [job1] 00:11:21.482 filename=/dev/nvme0n2 00:11:21.482 [job2] 00:11:21.482 filename=/dev/nvme0n3 00:11:21.482 [job3] 00:11:21.482 filename=/dev/nvme0n4 00:11:21.482 Could not set queue depth (nvme0n1) 00:11:21.482 Could not set queue depth (nvme0n2) 00:11:21.482 Could not set queue depth (nvme0n3) 00:11:21.482 Could not set queue depth (nvme0n4) 00:11:21.482 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.482 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.482 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.482 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.482 fio-3.35 00:11:21.482 Starting 4 threads 00:11:22.859 00:11:22.859 job0: (groupid=0, jobs=1): err= 0: pid=1273096: Tue Nov 19 10:39:10 2024 00:11:22.859 read: IOPS=3301, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1007msec) 00:11:22.859 slat (usec): min=2, max=17454, avg=128.31, stdev=918.73 00:11:22.860 clat (usec): min=1725, max=66133, avg=15351.71, stdev=9120.11 00:11:22.860 lat (usec): min=1733, max=66140, avg=15480.02, stdev=9216.46 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[ 3032], 5.00th=[ 7570], 10.00th=[ 9896], 20.00th=[10290], 00:11:22.860 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[15270], 00:11:22.860 | 70.00th=[16712], 80.00th=[17433], 90.00th=[22676], 95.00th=[31589], 00:11:22.860 | 99.00th=[60031], 99.50th=[61604], 
99.90th=[66323], 99.95th=[66323], 00:11:22.860 | 99.99th=[66323] 00:11:22.860 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:22.860 slat (usec): min=3, max=10088, avg=139.66, stdev=759.24 00:11:22.860 clat (usec): min=217, max=66088, avg=21318.73, stdev=17774.23 00:11:22.860 lat (usec): min=458, max=66098, avg=21458.39, stdev=17893.91 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[ 2057], 5.00th=[ 4490], 10.00th=[ 5211], 20.00th=[ 8356], 00:11:22.860 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[15664], 00:11:22.860 | 70.00th=[24511], 80.00th=[40109], 90.00th=[54264], 95.00th=[57410], 00:11:22.860 | 99.00th=[61604], 99.50th=[63701], 99.90th=[64750], 99.95th=[66323], 00:11:22.860 | 99.99th=[66323] 00:11:22.860 bw ( KiB/s): min=11824, max=16848, per=22.88%, avg=14336.00, stdev=3552.50, samples=2 00:11:22.860 iops : min= 2956, max= 4212, avg=3584.00, stdev=888.13, samples=2 00:11:22.860 lat (usec) : 250=0.01%, 500=0.04%, 750=0.07% 00:11:22.860 lat (msec) : 2=0.36%, 4=3.33%, 10=15.65%, 20=57.26%, 50=14.36% 00:11:22.860 lat (msec) : 100=8.92% 00:11:22.860 cpu : usr=4.57%, sys=4.37%, ctx=330, majf=0, minf=1 00:11:22.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:22.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.860 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.860 job1: (groupid=0, jobs=1): err= 0: pid=1273100: Tue Nov 19 10:39:10 2024 00:11:22.860 read: IOPS=4600, BW=18.0MiB/s (18.8MB/s)(18.7MiB/1043msec) 00:11:22.860 slat (usec): min=2, max=12050, avg=108.40, stdev=775.20 00:11:22.860 clat (usec): min=3978, max=61828, avg=14620.42, stdev=8230.00 00:11:22.860 lat (usec): min=3984, max=68104, avg=14728.83, stdev=8256.72 00:11:22.860 clat percentiles 
(usec): 00:11:22.860 | 1.00th=[ 4490], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11076], 00:11:22.860 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:11:22.860 | 70.00th=[13435], 80.00th=[16909], 90.00th=[21103], 95.00th=[25822], 00:11:22.860 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:11:22.860 | 99.99th=[61604] 00:11:22.860 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1043msec); 0 zone resets 00:11:22.860 slat (usec): min=4, max=18283, avg=86.67, stdev=568.86 00:11:22.860 clat (usec): min=1259, max=25730, avg=11740.86, stdev=2969.89 00:11:22.860 lat (usec): min=1275, max=25738, avg=11827.53, stdev=3007.66 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[ 3064], 5.00th=[ 5669], 10.00th=[ 8455], 20.00th=[10421], 00:11:22.860 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:11:22.860 | 70.00th=[12518], 80.00th=[12780], 90.00th=[14746], 95.00th=[16581], 00:11:22.860 | 99.00th=[21103], 99.50th=[21890], 99.90th=[23462], 99.95th=[25560], 00:11:22.860 | 99.99th=[25822] 00:11:22.860 bw ( KiB/s): min=20480, max=20480, per=32.69%, avg=20480.00, stdev= 0.00, samples=2 00:11:22.860 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:22.860 lat (msec) : 2=0.07%, 4=1.16%, 10=10.53%, 20=81.43%, 50=5.55% 00:11:22.860 lat (msec) : 100=1.27% 00:11:22.860 cpu : usr=4.03%, sys=5.95%, ctx=566, majf=0, minf=1 00:11:22.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:22.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.860 issued rwts: total=4798,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.860 job2: (groupid=0, jobs=1): err= 0: pid=1273102: Tue Nov 19 10:39:10 2024 00:11:22.860 read: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(11.7MiB/1045msec) 
00:11:22.860 slat (usec): min=2, max=13345, avg=137.05, stdev=782.31 00:11:22.860 clat (usec): min=8289, max=69584, avg=20017.46, stdev=10037.09 00:11:22.860 lat (usec): min=8296, max=77808, avg=20154.52, stdev=10084.25 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[11076], 5.00th=[13829], 10.00th=[14484], 20.00th=[15008], 00:11:22.860 | 30.00th=[15270], 40.00th=[15533], 50.00th=[16057], 60.00th=[17171], 00:11:22.860 | 70.00th=[20055], 80.00th=[22938], 90.00th=[27919], 95.00th=[34341], 00:11:22.860 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:11:22.860 | 99.99th=[69731] 00:11:22.860 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:11:22.860 slat (usec): min=3, max=26178, avg=184.18, stdev=1260.44 00:11:22.860 clat (usec): min=5331, max=99816, avg=23458.18, stdev=14582.68 00:11:22.860 lat (usec): min=5335, max=99826, avg=23642.36, stdev=14675.61 00:11:22.860 clat percentiles (msec): 00:11:22.860 | 1.00th=[ 10], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:11:22.860 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 24], 00:11:22.860 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 35], 95.00th=[ 47], 00:11:22.860 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 101], 00:11:22.860 | 99.99th=[ 101] 00:11:22.860 bw ( KiB/s): min=12288, max=12288, per=19.61%, avg=12288.00, stdev= 0.00, samples=2 00:11:22.860 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:22.860 lat (msec) : 10=1.37%, 20=60.06%, 50=34.13%, 100=4.44% 00:11:22.860 cpu : usr=3.16%, sys=4.60%, ctx=221, majf=0, minf=1 00:11:22.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:22.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.860 issued rwts: total=3004,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.860 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:11:22.860 job3: (groupid=0, jobs=1): err= 0: pid=1273103: Tue Nov 19 10:39:10 2024 00:11:22.860 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:11:22.860 slat (usec): min=2, max=13937, avg=108.80, stdev=724.20 00:11:22.860 clat (usec): min=7300, max=42009, avg=14237.56, stdev=4573.95 00:11:22.860 lat (usec): min=7306, max=42021, avg=14346.36, stdev=4638.35 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[12256], 00:11:22.860 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:11:22.860 | 70.00th=[13566], 80.00th=[15270], 90.00th=[19268], 95.00th=[28181], 00:11:22.860 | 99.00th=[30540], 99.50th=[30540], 99.90th=[33162], 99.95th=[38011], 00:11:22.860 | 99.99th=[42206] 00:11:22.860 write: IOPS=4559, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1007msec); 0 zone resets 00:11:22.860 slat (usec): min=3, max=11185, avg=112.69, stdev=608.50 00:11:22.860 clat (usec): min=5940, max=61013, avg=15052.86, stdev=7315.03 00:11:22.860 lat (usec): min=6523, max=61026, avg=15165.55, stdev=7365.66 00:11:22.860 clat percentiles (usec): 00:11:22.860 | 1.00th=[ 7898], 5.00th=[10552], 10.00th=[11338], 20.00th=[11994], 00:11:22.860 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12911], 60.00th=[13042], 00:11:22.860 | 70.00th=[13304], 80.00th=[14877], 90.00th=[21365], 95.00th=[30540], 00:11:22.860 | 99.00th=[51643], 99.50th=[55313], 99.90th=[61080], 99.95th=[61080], 00:11:22.860 | 99.99th=[61080] 00:11:22.860 bw ( KiB/s): min=15232, max=20480, per=28.50%, avg=17856.00, stdev=3710.90, samples=2 00:11:22.860 iops : min= 3808, max= 5120, avg=4464.00, stdev=927.72, samples=2 00:11:22.860 lat (msec) : 10=5.32%, 20=84.59%, 50=9.55%, 100=0.54% 00:11:22.860 cpu : usr=4.17%, sys=8.45%, ctx=451, majf=0, minf=1 00:11:22.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:22.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:22.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.860 issued rwts: total=4096,4591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.860 00:11:22.860 Run status group 0 (all jobs): 00:11:22.860 READ: bw=56.9MiB/s (59.7MB/s), 11.2MiB/s-18.0MiB/s (11.8MB/s-18.8MB/s), io=59.5MiB (62.4MB), run=1007-1045msec 00:11:22.860 WRITE: bw=61.2MiB/s (64.2MB/s), 11.5MiB/s-19.2MiB/s (12.0MB/s-20.1MB/s), io=63.9MiB (67.0MB), run=1007-1045msec 00:11:22.860 00:11:22.860 Disk stats (read/write): 00:11:22.860 nvme0n1: ios=2610/2654, merge=0/0, ticks=40062/64867, in_queue=104929, util=86.37% 00:11:22.860 nvme0n2: ios=4148/4463, merge=0/0, ticks=50270/49245, in_queue=99515, util=97.66% 00:11:22.860 nvme0n3: ios=2560/2612, merge=0/0, ticks=20233/28612, in_queue=48845, util=88.91% 00:11:22.860 nvme0n4: ios=3731/4096, merge=0/0, ticks=26748/25138, in_queue=51886, util=97.57% 00:11:22.860 10:39:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:22.860 [global] 00:11:22.860 thread=1 00:11:22.860 invalidate=1 00:11:22.860 rw=randwrite 00:11:22.860 time_based=1 00:11:22.860 runtime=1 00:11:22.860 ioengine=libaio 00:11:22.860 direct=1 00:11:22.861 bs=4096 00:11:22.861 iodepth=128 00:11:22.861 norandommap=0 00:11:22.861 numjobs=1 00:11:22.861 00:11:22.861 verify_dump=1 00:11:22.861 verify_backlog=512 00:11:22.861 verify_state_save=0 00:11:22.861 do_verify=1 00:11:22.861 verify=crc32c-intel 00:11:22.861 [job0] 00:11:22.861 filename=/dev/nvme0n1 00:11:22.861 [job1] 00:11:22.861 filename=/dev/nvme0n2 00:11:22.861 [job2] 00:11:22.861 filename=/dev/nvme0n3 00:11:22.861 [job3] 00:11:22.861 filename=/dev/nvme0n4 00:11:22.861 Could not set queue depth (nvme0n1) 00:11:22.861 Could not set queue depth (nvme0n2) 00:11:22.861 Could not set queue depth (nvme0n3) 
00:11:22.861 Could not set queue depth (nvme0n4) 00:11:23.118 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.118 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.118 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.118 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.118 fio-3.35 00:11:23.118 Starting 4 threads 00:11:24.490 00:11:24.491 job0: (groupid=0, jobs=1): err= 0: pid=1273341: Tue Nov 19 10:39:11 2024 00:11:24.491 read: IOPS=3086, BW=12.1MiB/s (12.6MB/s)(12.1MiB/1002msec) 00:11:24.491 slat (usec): min=2, max=22432, avg=133.05, stdev=912.10 00:11:24.491 clat (usec): min=635, max=73416, avg=16993.53, stdev=12028.07 00:11:24.491 lat (usec): min=3701, max=85385, avg=17126.58, stdev=12134.26 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 7308], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10814], 00:11:24.491 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12125], 60.00th=[15795], 00:11:24.491 | 70.00th=[16319], 80.00th=[17171], 90.00th=[31589], 95.00th=[52691], 00:11:24.491 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:11:24.491 | 99.99th=[73925] 00:11:24.491 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:24.491 slat (usec): min=4, max=20466, avg=156.54, stdev=1026.04 00:11:24.491 clat (usec): min=3867, max=77869, avg=20610.06, stdev=15260.40 00:11:24.491 lat (usec): min=3874, max=77890, avg=20766.60, stdev=15359.19 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 7439], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10814], 00:11:24.491 | 30.00th=[11207], 40.00th=[11863], 50.00th=[13829], 60.00th=[16319], 00:11:24.491 | 70.00th=[19792], 80.00th=[28181], 90.00th=[43254], 95.00th=[63177], 00:11:24.491 | 
99.00th=[67634], 99.50th=[67634], 99.90th=[78119], 99.95th=[78119], 00:11:24.491 | 99.99th=[78119] 00:11:24.491 bw ( KiB/s): min=12800, max=15016, per=23.65%, avg=13908.00, stdev=1566.95, samples=2 00:11:24.491 iops : min= 3200, max= 3754, avg=3477.00, stdev=391.74, samples=2 00:11:24.491 lat (usec) : 750=0.01% 00:11:24.491 lat (msec) : 4=0.49%, 10=10.81%, 20=66.57%, 50=15.59%, 100=6.51% 00:11:24.491 cpu : usr=4.10%, sys=5.09%, ctx=353, majf=0, minf=1 00:11:24.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:24.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.491 issued rwts: total=3093,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.491 job1: (groupid=0, jobs=1): err= 0: pid=1273342: Tue Nov 19 10:39:11 2024 00:11:24.491 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec) 00:11:24.491 slat (usec): min=2, max=21929, avg=118.87, stdev=722.82 00:11:24.491 clat (usec): min=502, max=46878, avg=15108.18, stdev=7071.99 00:11:24.491 lat (usec): min=5901, max=46886, avg=15227.05, stdev=7087.35 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10814], 00:11:24.491 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:11:24.491 | 70.00th=[19268], 80.00th=[21365], 90.00th=[22676], 95.00th=[28967], 00:11:24.491 | 99.00th=[44303], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:11:24.491 | 99.99th=[46924] 00:11:24.491 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:24.491 slat (usec): min=3, max=8714, avg=101.59, stdev=551.42 00:11:24.491 clat (usec): min=7401, max=25484, avg=13581.40, stdev=4744.07 00:11:24.491 lat (usec): min=7406, max=25488, avg=13682.99, stdev=4752.52 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 
1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9896], 00:11:24.491 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[12256], 00:11:24.491 | 70.00th=[14222], 80.00th=[17957], 90.00th=[22152], 95.00th=[23725], 00:11:24.491 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:11:24.491 | 99.99th=[25560] 00:11:24.491 bw ( KiB/s): min=16384, max=20480, per=31.34%, avg=18432.00, stdev=2896.31, samples=2 00:11:24.491 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:24.491 lat (usec) : 750=0.01% 00:11:24.491 lat (msec) : 10=18.13%, 20=61.38%, 50=20.48% 00:11:24.491 cpu : usr=3.18%, sys=6.97%, ctx=411, majf=0, minf=1 00:11:24.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:24.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.491 issued rwts: total=4232,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.491 job2: (groupid=0, jobs=1): err= 0: pid=1273343: Tue Nov 19 10:39:11 2024 00:11:24.491 read: IOPS=3166, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1010msec) 00:11:24.491 slat (usec): min=3, max=14383, avg=119.60, stdev=868.55 00:11:24.491 clat (usec): min=410, max=46426, avg=15241.11, stdev=5180.94 00:11:24.491 lat (usec): min=422, max=46434, avg=15360.72, stdev=5234.79 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 2008], 5.00th=[ 5538], 10.00th=[10421], 20.00th=[11994], 00:11:24.491 | 30.00th=[14222], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:11:24.491 | 70.00th=[16057], 80.00th=[16909], 90.00th=[20579], 95.00th=[23200], 00:11:24.491 | 99.00th=[30540], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:11:24.491 | 99.99th=[46400] 00:11:24.491 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:11:24.491 slat (usec): min=5, max=11923, 
avg=160.70, stdev=854.59 00:11:24.491 clat (usec): min=4439, max=50270, avg=22135.85, stdev=11039.16 00:11:24.491 lat (usec): min=4448, max=50307, avg=22296.55, stdev=11124.21 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 6783], 5.00th=[10683], 10.00th=[12125], 20.00th=[13698], 00:11:24.491 | 30.00th=[13960], 40.00th=[15533], 50.00th=[16188], 60.00th=[21890], 00:11:24.491 | 70.00th=[25560], 80.00th=[34866], 90.00th=[40633], 95.00th=[43254], 00:11:24.491 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:11:24.491 | 99.99th=[50070] 00:11:24.491 bw ( KiB/s): min=14328, max=14328, per=24.37%, avg=14328.00, stdev= 0.00, samples=2 00:11:24.491 iops : min= 3582, max= 3582, avg=3582.00, stdev= 0.00, samples=2 00:11:24.491 lat (usec) : 500=0.01% 00:11:24.491 lat (msec) : 2=0.37%, 4=1.40%, 10=3.79%, 20=64.45%, 50=29.87% 00:11:24.491 lat (msec) : 100=0.10% 00:11:24.491 cpu : usr=5.55%, sys=8.13%, ctx=332, majf=0, minf=1 00:11:24.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:24.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.491 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.491 job3: (groupid=0, jobs=1): err= 0: pid=1273344: Tue Nov 19 10:39:11 2024 00:11:24.491 read: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1006msec) 00:11:24.491 slat (usec): min=2, max=11618, avg=118.52, stdev=780.40 00:11:24.491 clat (usec): min=3069, max=31443, avg=14885.30, stdev=3608.25 00:11:24.491 lat (usec): min=5832, max=31466, avg=15003.82, stdev=3683.04 00:11:24.491 clat percentiles (usec): 00:11:24.491 | 1.00th=[ 6128], 5.00th=[11076], 10.00th=[12256], 20.00th=[12518], 00:11:24.491 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13829], 60.00th=[14877], 00:11:24.491 | 70.00th=[15270], 80.00th=[17957], 
90.00th=[19006], 95.00th=[22676], 00:11:24.491 | 99.00th=[25822], 99.50th=[27657], 99.90th=[28181], 99.95th=[28443], 00:11:24.491 | 99.99th=[31327] 00:11:24.491 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:24.491 slat (usec): min=3, max=23170, avg=207.57, stdev=1218.79 00:11:24.491 clat (msec): min=4, max=125, avg=27.26, stdev=22.68 00:11:24.491 lat (msec): min=4, max=125, avg=27.47, stdev=22.83 00:11:24.491 clat percentiles (msec): 00:11:24.491 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:11:24.491 | 30.00th=[ 14], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 23], 00:11:24.491 | 70.00th=[ 29], 80.00th=[ 36], 90.00th=[ 54], 95.00th=[ 69], 00:11:24.491 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 126], 99.95th=[ 126], 00:11:24.491 | 99.99th=[ 126] 00:11:24.491 bw ( KiB/s): min= 9656, max=14920, per=20.90%, avg=12288.00, stdev=3722.21, samples=2 00:11:24.491 iops : min= 2414, max= 3730, avg=3072.00, stdev=930.55, samples=2 00:11:24.491 lat (msec) : 4=0.02%, 10=5.01%, 20=65.65%, 50=23.13%, 100=4.30% 00:11:24.491 lat (msec) : 250=1.89% 00:11:24.491 cpu : usr=3.28%, sys=6.47%, ctx=279, majf=0, minf=1 00:11:24.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:24.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.491 issued rwts: total=2791,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.491 00:11:24.491 Run status group 0 (all jobs): 00:11:24.491 READ: bw=51.5MiB/s (54.0MB/s), 10.8MiB/s-16.4MiB/s (11.4MB/s-17.2MB/s), io=52.0MiB (54.5MB), run=1002-1010msec 00:11:24.491 WRITE: bw=57.4MiB/s (60.2MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=58.0MiB (60.8MB), run=1002-1010msec 00:11:24.491 00:11:24.491 Disk stats (read/write): 00:11:24.491 nvme0n1: ios=2585/2820, merge=0/0, ticks=15716/17708, 
in_queue=33424, util=93.69% 00:11:24.491 nvme0n2: ios=3635/3605, merge=0/0, ticks=14275/11987, in_queue=26262, util=97.76% 00:11:24.491 nvme0n3: ios=2738/3072, merge=0/0, ticks=39057/64467, in_queue=103524, util=97.70% 00:11:24.491 nvme0n4: ios=2188/2560, merge=0/0, ticks=17766/36461, in_queue=54227, util=97.68% 00:11:24.491 10:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:24.491 10:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1273489 00:11:24.491 10:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:24.491 10:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:24.491 [global] 00:11:24.491 thread=1 00:11:24.491 invalidate=1 00:11:24.491 rw=read 00:11:24.491 time_based=1 00:11:24.491 runtime=10 00:11:24.491 ioengine=libaio 00:11:24.491 direct=1 00:11:24.491 bs=4096 00:11:24.491 iodepth=1 00:11:24.491 norandommap=1 00:11:24.491 numjobs=1 00:11:24.491 00:11:24.491 [job0] 00:11:24.491 filename=/dev/nvme0n1 00:11:24.491 [job1] 00:11:24.491 filename=/dev/nvme0n2 00:11:24.491 [job2] 00:11:24.491 filename=/dev/nvme0n3 00:11:24.491 [job3] 00:11:24.491 filename=/dev/nvme0n4 00:11:24.491 Could not set queue depth (nvme0n1) 00:11:24.491 Could not set queue depth (nvme0n2) 00:11:24.491 Could not set queue depth (nvme0n3) 00:11:24.491 Could not set queue depth (nvme0n4) 00:11:24.491 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.491 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.491 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.491 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.491 
fio-3.35 00:11:24.491 Starting 4 threads 00:11:27.768 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:27.768 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:27.768 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=339968, buflen=4096 00:11:27.768 fio: pid=1273580, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.026 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.026 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:28.026 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39006208, buflen=4096 00:11:28.026 fio: pid=1273579, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.285 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.285 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:28.285 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10514432, buflen=4096 00:11:28.285 fio: pid=1273577, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.543 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60739584, buflen=4096 00:11:28.543 fio: pid=1273578, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.543 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.543 10:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:28.543 00:11:28.543 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1273577: Tue Nov 19 10:39:16 2024 00:11:28.543 read: IOPS=729, BW=2915KiB/s (2985kB/s)(10.0MiB/3522msec) 00:11:28.543 slat (usec): min=4, max=33931, avg=38.42, stdev=779.63 00:11:28.543 clat (usec): min=198, max=42145, avg=1321.09, stdev=6380.56 00:11:28.543 lat (usec): min=204, max=42159, avg=1354.49, stdev=6420.96 00:11:28.543 clat percentiles (usec): 00:11:28.543 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 237], 00:11:28.543 | 30.00th=[ 260], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 322], 00:11:28.543 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 429], 95.00th=[ 490], 00:11:28.543 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.543 | 99.99th=[42206] 00:11:28.544 bw ( KiB/s): min= 104, max= 7648, per=11.23%, avg=3221.33, stdev=2793.48, samples=6 00:11:28.544 iops : min= 26, max= 1912, avg=805.33, stdev=698.37, samples=6 00:11:28.544 lat (usec) : 250=27.76%, 500=68.26%, 750=1.48% 00:11:28.544 lat (msec) : 50=2.45% 00:11:28.544 cpu : usr=0.43%, sys=1.19%, ctx=2574, majf=0, minf=2 00:11:28.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.544 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1273578: Tue Nov 19 10:39:16 
2024 00:11:28.544 read: IOPS=3936, BW=15.4MiB/s (16.1MB/s)(57.9MiB/3767msec) 00:11:28.544 slat (usec): min=4, max=29995, avg=15.19, stdev=343.48 00:11:28.544 clat (usec): min=165, max=41122, avg=235.95, stdev=821.40 00:11:28.544 lat (usec): min=169, max=53963, avg=251.13, stdev=929.44 00:11:28.544 clat percentiles (usec): 00:11:28.544 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:11:28.544 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:11:28.544 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 258], 95.00th=[ 289], 00:11:28.544 | 99.00th=[ 375], 99.50th=[ 453], 99.90th=[ 553], 99.95th=[ 2540], 00:11:28.544 | 99.99th=[41157] 00:11:28.544 bw ( KiB/s): min=11768, max=17904, per=54.67%, avg=15675.71, stdev=2276.07, samples=7 00:11:28.544 iops : min= 2942, max= 4476, avg=3918.86, stdev=569.05, samples=7 00:11:28.544 lat (usec) : 250=88.23%, 500=11.58%, 750=0.13%, 1000=0.01% 00:11:28.544 lat (msec) : 4=0.01%, 10=0.01%, 50=0.04% 00:11:28.544 cpu : usr=1.62%, sys=4.83%, ctx=14837, majf=0, minf=1 00:11:28.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 issued rwts: total=14830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.544 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1273579: Tue Nov 19 10:39:16 2024 00:11:28.544 read: IOPS=2940, BW=11.5MiB/s (12.0MB/s)(37.2MiB/3239msec) 00:11:28.544 slat (usec): min=5, max=9871, avg=15.57, stdev=101.39 00:11:28.544 clat (usec): min=168, max=41996, avg=318.70, stdev=1179.21 00:11:28.544 lat (usec): min=174, max=50920, avg=334.26, stdev=1218.95 00:11:28.544 clat percentiles (usec): 00:11:28.544 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 210], 20.00th=[ 233], 
00:11:28.544 | 30.00th=[ 251], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:11:28.544 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 379], 00:11:28.544 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 627], 99.95th=[41157], 00:11:28.544 | 99.99th=[42206] 00:11:28.544 bw ( KiB/s): min=10176, max=14296, per=44.26%, avg=12689.33, stdev=1445.85, samples=6 00:11:28.544 iops : min= 2544, max= 3574, avg=3172.33, stdev=361.46, samples=6 00:11:28.544 lat (usec) : 250=29.38%, 500=69.99%, 750=0.52% 00:11:28.544 lat (msec) : 2=0.01%, 50=0.08% 00:11:28.544 cpu : usr=2.63%, sys=5.10%, ctx=9527, majf=0, minf=1 00:11:28.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 issued rwts: total=9524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.544 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1273580: Tue Nov 19 10:39:16 2024 00:11:28.544 read: IOPS=28, BW=113KiB/s (116kB/s)(332KiB/2940msec) 00:11:28.544 slat (nsec): min=10295, max=37492, avg=21023.64, stdev=8871.71 00:11:28.544 clat (usec): min=366, max=41444, avg=35113.91, stdev=14338.67 00:11:28.544 lat (usec): min=381, max=41460, avg=35135.02, stdev=14340.21 00:11:28.544 clat percentiles (usec): 00:11:28.544 | 1.00th=[ 367], 5.00th=[ 429], 10.00th=[ 494], 20.00th=[40633], 00:11:28.544 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:28.544 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:28.544 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:28.544 | 99.99th=[41681] 00:11:28.544 bw ( KiB/s): min= 96, max= 184, per=0.40%, avg=115.20, stdev=38.62, samples=5 00:11:28.544 iops : min= 24, max= 46, 
avg=28.80, stdev= 9.65, samples=5 00:11:28.544 lat (usec) : 500=11.90%, 750=2.38% 00:11:28.544 lat (msec) : 50=84.52% 00:11:28.544 cpu : usr=0.14%, sys=0.00%, ctx=84, majf=0, minf=1 00:11:28.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.544 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.544 00:11:28.544 Run status group 0 (all jobs): 00:11:28.544 READ: bw=28.0MiB/s (29.4MB/s), 113KiB/s-15.4MiB/s (116kB/s-16.1MB/s), io=105MiB (111MB), run=2940-3767msec 00:11:28.544 00:11:28.544 Disk stats (read/write): 00:11:28.544 nvme0n1: ios=2563/0, merge=0/0, ticks=3206/0, in_queue=3206, util=94.65% 00:11:28.544 nvme0n2: ios=14096/0, merge=0/0, ticks=3260/0, in_queue=3260, util=94.59% 00:11:28.544 nvme0n3: ios=9567/0, merge=0/0, ticks=3106/0, in_queue=3106, util=98.75% 00:11:28.544 nvme0n4: ios=81/0, merge=0/0, ticks=2835/0, in_queue=2835, util=96.75% 00:11:28.802 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.802 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:29.060 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.060 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:29.318 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:11:29.318 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:29.577 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.577 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:29.836 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:29.836 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1273489 00:11:29.836 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:29.836 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:30.094 10:39:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:30.094 nvmf hotplug test: fio failed as expected 00:11:30.094 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.352 rmmod nvme_tcp 00:11:30.352 rmmod nvme_fabrics 00:11:30.352 rmmod nvme_keyring 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1270821 ']' 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1270821 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1270821 ']' 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1270821 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270821 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270821' 00:11:30.352 killing process with pid 1270821 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1270821 00:11:30.352 10:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1270821 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.612 10:39:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.154 00:11:33.154 real 0m24.613s 00:11:33.154 user 1m26.345s 00:11:33.154 sys 0m7.132s 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.154 ************************************ 00:11:33.154 END TEST nvmf_fio_target 00:11:33.154 ************************************ 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.154 10:39:20 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.154 ************************************ 00:11:33.154 START TEST nvmf_bdevio 00:11:33.154 ************************************ 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.154 * Looking for test storage... 00:11:33.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.154 --rc genhtml_branch_coverage=1 00:11:33.154 --rc genhtml_function_coverage=1 00:11:33.154 --rc genhtml_legend=1 00:11:33.154 --rc geninfo_all_blocks=1 00:11:33.154 --rc geninfo_unexecuted_blocks=1 00:11:33.154 00:11:33.154 ' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.154 --rc genhtml_branch_coverage=1 00:11:33.154 --rc genhtml_function_coverage=1 00:11:33.154 --rc genhtml_legend=1 00:11:33.154 --rc geninfo_all_blocks=1 00:11:33.154 --rc geninfo_unexecuted_blocks=1 00:11:33.154 00:11:33.154 ' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.154 --rc genhtml_branch_coverage=1 00:11:33.154 --rc genhtml_function_coverage=1 00:11:33.154 --rc genhtml_legend=1 00:11:33.154 --rc geninfo_all_blocks=1 00:11:33.154 --rc geninfo_unexecuted_blocks=1 00:11:33.154 00:11:33.154 ' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.154 --rc genhtml_branch_coverage=1 00:11:33.154 --rc genhtml_function_coverage=1 00:11:33.154 --rc genhtml_legend=1 00:11:33.154 --rc geninfo_all_blocks=1 00:11:33.154 --rc geninfo_unexecuted_blocks=1 00:11:33.154 00:11:33.154 ' 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.154 10:39:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.154 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.155 10:39:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.155 10:39:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.155 
10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.155 10:39:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.059 10:39:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.059 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.059 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.060 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.060 
10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.060 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.060 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.060 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:11:35.319 00:11:35.319 --- 10.0.0.2 ping statistics --- 00:11:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.319 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:11:35.319 00:11:35.319 --- 10.0.0.1 ping statistics --- 00:11:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.319 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.319 10:39:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1276338 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1276338 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1276338 ']' 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.319 10:39:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.319 [2024-11-19 10:39:22.850037] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:11:35.319 [2024-11-19 10:39:22.850124] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.319 [2024-11-19 10:39:22.926635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.576 [2024-11-19 10:39:22.989175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.576 [2024-11-19 10:39:22.989228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.576 [2024-11-19 10:39:22.989249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.576 [2024-11-19 10:39:22.989266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.576 [2024-11-19 10:39:22.989295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:35.576 [2024-11-19 10:39:22.991085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.576 [2024-11-19 10:39:22.991112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:35.576 [2024-11-19 10:39:22.991172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:35.576 [2024-11-19 10:39:22.991175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 [2024-11-19 10:39:23.144259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:35.576 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.576 10:39:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.834 Malloc0 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.834 [2024-11-19 10:39:23.220649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.834 { 00:11:35.834 "params": { 00:11:35.834 "name": "Nvme$subsystem", 00:11:35.834 "trtype": "$TEST_TRANSPORT", 00:11:35.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.834 "adrfam": "ipv4", 00:11:35.834 "trsvcid": "$NVMF_PORT", 00:11:35.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.834 "hdgst": ${hdgst:-false}, 00:11:35.834 "ddgst": ${ddgst:-false} 00:11:35.834 }, 00:11:35.834 "method": "bdev_nvme_attach_controller" 00:11:35.834 } 00:11:35.834 EOF 00:11:35.834 )") 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:35.834 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.834 "params": { 00:11:35.834 "name": "Nvme1", 00:11:35.834 "trtype": "tcp", 00:11:35.834 "traddr": "10.0.0.2", 00:11:35.834 "adrfam": "ipv4", 00:11:35.834 "trsvcid": "4420", 00:11:35.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.834 "hdgst": false, 00:11:35.834 "ddgst": false 00:11:35.834 }, 00:11:35.834 "method": "bdev_nvme_attach_controller" 00:11:35.834 }' 00:11:35.834 [2024-11-19 10:39:23.271807] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:11:35.834 [2024-11-19 10:39:23.271882] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276372 ] 00:11:35.834 [2024-11-19 10:39:23.340961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.834 [2024-11-19 10:39:23.406943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.834 [2024-11-19 10:39:23.406995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.834 [2024-11-19 10:39:23.406999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.092 I/O targets: 00:11:36.092 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:36.092 00:11:36.092 00:11:36.092 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.092 http://cunit.sourceforge.net/ 00:11:36.092 00:11:36.092 00:11:36.092 Suite: bdevio tests on: Nvme1n1 00:11:36.348 Test: blockdev write read block ...passed 00:11:36.348 Test: blockdev write zeroes read block ...passed 00:11:36.348 Test: blockdev write zeroes read no split ...passed 00:11:36.348 Test: blockdev write zeroes read split 
...passed 00:11:36.348 Test: blockdev write zeroes read split partial ...passed 00:11:36.348 Test: blockdev reset ...[2024-11-19 10:39:23.785049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:36.348 [2024-11-19 10:39:23.785152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dc640 (9): Bad file descriptor 00:11:36.348 [2024-11-19 10:39:23.922800] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:36.348 passed 00:11:36.348 Test: blockdev write read 8 blocks ...passed 00:11:36.605 Test: blockdev write read size > 128k ...passed 00:11:36.605 Test: blockdev write read invalid size ...passed 00:11:36.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:36.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:36.606 Test: blockdev write read max offset ...passed 00:11:36.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:36.606 Test: blockdev writev readv 8 blocks ...passed 00:11:36.606 Test: blockdev writev readv 30 x 1block ...passed 00:11:36.865 Test: blockdev writev readv block ...passed 00:11:36.865 Test: blockdev writev readv size > 128k ...passed 00:11:36.865 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:36.865 Test: blockdev comparev and writev ...[2024-11-19 10:39:24.258541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.258586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.258612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 
10:39:24.258629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.258957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.258982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.259003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.259018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.259381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.259402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.259418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.259773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.259797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.259817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:36.865 [2024-11-19 10:39:24.259833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:36.865 passed 00:11:36.865 Test: blockdev nvme passthru rw ...passed 00:11:36.865 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:39:24.342580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:36.865 [2024-11-19 10:39:24.342608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.342750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:36.865 [2024-11-19 10:39:24.342772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.342914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:36.865 [2024-11-19 10:39:24.342935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:36.865 [2024-11-19 10:39:24.343078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:36.865 [2024-11-19 10:39:24.343100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:36.865 passed 00:11:36.865 Test: blockdev nvme admin passthru ...passed 00:11:36.865 Test: blockdev copy ...passed 00:11:36.865 00:11:36.865 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.865 suites 1 1 n/a 0 0 00:11:36.865 tests 23 23 23 0 0 00:11:36.865 asserts 152 152 152 0 n/a 00:11:36.865 00:11:36.865 Elapsed time = 1.451 seconds 
00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.123 rmmod nvme_tcp 00:11:37.123 rmmod nvme_fabrics 00:11:37.123 rmmod nvme_keyring 00:11:37.123 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1276338 ']' 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1276338 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1276338 ']' 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1276338 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1276338 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1276338' 00:11:37.124 killing process with pid 1276338 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1276338 00:11:37.124 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1276338 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.382 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.920 00:11:39.920 real 0m6.775s 00:11:39.920 user 0m11.273s 00:11:39.920 sys 0m2.327s 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.920 ************************************ 00:11:39.920 END TEST nvmf_bdevio 00:11:39.920 ************************************ 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:39.920 00:11:39.920 real 3m57.346s 00:11:39.920 user 10m20.750s 00:11:39.920 sys 1m7.593s 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:39.920 ************************************ 00:11:39.920 END TEST nvmf_target_core 00:11:39.920 ************************************ 00:11:39.920 10:39:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:39.920 10:39:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.920 10:39:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.920 10:39:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:39.920 ************************************ 00:11:39.920 START TEST nvmf_target_extra 00:11:39.920 ************************************ 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:39.920 * Looking for test storage... 00:11:39.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.920 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.921 --rc genhtml_branch_coverage=1 00:11:39.921 --rc genhtml_function_coverage=1 00:11:39.921 --rc genhtml_legend=1 00:11:39.921 --rc geninfo_all_blocks=1 
00:11:39.921 --rc geninfo_unexecuted_blocks=1 00:11:39.921 00:11:39.921 ' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.921 --rc genhtml_branch_coverage=1 00:11:39.921 --rc genhtml_function_coverage=1 00:11:39.921 --rc genhtml_legend=1 00:11:39.921 --rc geninfo_all_blocks=1 00:11:39.921 --rc geninfo_unexecuted_blocks=1 00:11:39.921 00:11:39.921 ' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.921 --rc genhtml_branch_coverage=1 00:11:39.921 --rc genhtml_function_coverage=1 00:11:39.921 --rc genhtml_legend=1 00:11:39.921 --rc geninfo_all_blocks=1 00:11:39.921 --rc geninfo_unexecuted_blocks=1 00:11:39.921 00:11:39.921 ' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.921 --rc genhtml_branch_coverage=1 00:11:39.921 --rc genhtml_function_coverage=1 00:11:39.921 --rc genhtml_legend=1 00:11:39.921 --rc geninfo_all_blocks=1 00:11:39.921 --rc geninfo_unexecuted_blocks=1 00:11:39.921 00:11:39.921 ' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.921 ************************************ 00:11:39.921 START TEST nvmf_example 00:11:39.921 ************************************ 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.921 * Looking for test storage... 00:11:39.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.921 
10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:39.921 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.922 --rc genhtml_branch_coverage=1 00:11:39.922 --rc genhtml_function_coverage=1 00:11:39.922 --rc genhtml_legend=1 00:11:39.922 --rc geninfo_all_blocks=1 00:11:39.922 --rc geninfo_unexecuted_blocks=1 00:11:39.922 00:11:39.922 ' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.922 --rc genhtml_branch_coverage=1 00:11:39.922 --rc genhtml_function_coverage=1 00:11:39.922 --rc genhtml_legend=1 00:11:39.922 --rc geninfo_all_blocks=1 00:11:39.922 --rc geninfo_unexecuted_blocks=1 00:11:39.922 00:11:39.922 ' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.922 --rc genhtml_branch_coverage=1 00:11:39.922 --rc genhtml_function_coverage=1 00:11:39.922 --rc genhtml_legend=1 00:11:39.922 --rc geninfo_all_blocks=1 00:11:39.922 --rc geninfo_unexecuted_blocks=1 00:11:39.922 00:11:39.922 ' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.922 --rc 
genhtml_branch_coverage=1 00:11:39.922 --rc genhtml_function_coverage=1 00:11:39.922 --rc genhtml_legend=1 00:11:39.922 --rc geninfo_all_blocks=1 00:11:39.922 --rc geninfo_unexecuted_blocks=1 00:11:39.922 00:11:39.922 ' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:39.922 10:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.922 
10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.922 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.512 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:42.512 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:42.512 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.512 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:42.513 Found net devices under 0000:09:00.0: cvl_0_0 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.513 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:42.513 Found net devices under 0000:09:00.1: cvl_0_1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.513 
10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:11:42.513 00:11:42.513 --- 10.0.0.2 ping statistics --- 00:11:42.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.513 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:42.513 00:11:42.513 --- 10.0.0.1 ping statistics --- 00:11:42.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.513 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.513 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1278629 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1278629 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1278629 ']' 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.513 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:43.446 
10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.446 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:43.447 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:55.645 Initializing NVMe Controllers 00:11:55.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:55.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:55.645 Initialization complete. Launching workers. 00:11:55.645 ======================================================== 00:11:55.645 Latency(us) 00:11:55.645 Device Information : IOPS MiB/s Average min max 00:11:55.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14899.30 58.20 4297.92 889.73 15268.86 00:11:55.645 ======================================================== 00:11:55.645 Total : 14899.30 58.20 4297.92 889.73 15268.86 00:11:55.645 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.645 rmmod nvme_tcp 00:11:55.645 rmmod nvme_fabrics 00:11:55.645 rmmod nvme_keyring 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1278629 ']' 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1278629 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1278629 ']' 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1278629 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278629 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278629' 00:11:55.645 killing process with pid 1278629 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1278629 00:11:55.645 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1278629 00:11:55.645 nvmf threads initialize successfully 00:11:55.646 bdev subsystem init successfully 00:11:55.646 created a nvmf target service 00:11:55.646 create targets's poll groups done 00:11:55.646 all subsystems of target started 00:11:55.646 nvmf target is running 00:11:55.646 all subsystems of target stopped 00:11:55.646 destroy targets's poll groups done 00:11:55.646 destroyed the nvmf target service 00:11:55.646 bdev subsystem 
finish successfully 00:11:55.646 nvmf threads destroy successfully 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.646 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.217 00:11:56.217 real 0m16.337s 00:11:56.217 user 0m45.884s 00:11:56.217 sys 0m3.441s 00:11:56.217 
10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.217 ************************************ 00:11:56.217 END TEST nvmf_example 00:11:56.217 ************************************ 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.217 ************************************ 00:11:56.217 START TEST nvmf_filesystem 00:11:56.217 ************************************ 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.217 * Looking for test storage... 
00:11:56.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.217 
10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.217 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:56.217 --rc genhtml_branch_coverage=1 00:11:56.217 --rc genhtml_function_coverage=1 00:11:56.217 --rc genhtml_legend=1 00:11:56.217 --rc geninfo_all_blocks=1 00:11:56.217 --rc geninfo_unexecuted_blocks=1 00:11:56.217 00:11:56.217 ' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.217 --rc genhtml_branch_coverage=1 00:11:56.217 --rc genhtml_function_coverage=1 00:11:56.217 --rc genhtml_legend=1 00:11:56.217 --rc geninfo_all_blocks=1 00:11:56.217 --rc geninfo_unexecuted_blocks=1 00:11:56.217 00:11:56.217 ' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.217 --rc genhtml_branch_coverage=1 00:11:56.217 --rc genhtml_function_coverage=1 00:11:56.217 --rc genhtml_legend=1 00:11:56.217 --rc geninfo_all_blocks=1 00:11:56.217 --rc geninfo_unexecuted_blocks=1 00:11:56.217 00:11:56.217 ' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.217 --rc genhtml_branch_coverage=1 00:11:56.217 --rc genhtml_function_coverage=1 00:11:56.217 --rc genhtml_legend=1 00:11:56.217 --rc geninfo_all_blocks=1 00:11:56.217 --rc geninfo_unexecuted_blocks=1 00:11:56.217 00:11:56.217 ' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:56.217 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:56.217 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:56.217 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:56.218 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:56.218 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.218 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:56.218 
10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:56.218 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:56.218 #define SPDK_CONFIG_H 00:11:56.218 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:56.218 #define SPDK_CONFIG_APPS 1 00:11:56.218 #define SPDK_CONFIG_ARCH native 00:11:56.218 #undef SPDK_CONFIG_ASAN 00:11:56.218 #undef SPDK_CONFIG_AVAHI 00:11:56.218 #undef SPDK_CONFIG_CET 00:11:56.218 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:56.218 #define SPDK_CONFIG_COVERAGE 1 00:11:56.218 #define SPDK_CONFIG_CROSS_PREFIX 00:11:56.218 #undef SPDK_CONFIG_CRYPTO 00:11:56.218 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:56.218 #undef SPDK_CONFIG_CUSTOMOCF 00:11:56.218 #undef SPDK_CONFIG_DAOS 00:11:56.218 #define SPDK_CONFIG_DAOS_DIR 00:11:56.218 #define SPDK_CONFIG_DEBUG 1 00:11:56.218 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:56.218 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.218 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:56.218 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:56.218 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:56.218 #undef SPDK_CONFIG_DPDK_UADK 00:11:56.218 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.218 #define SPDK_CONFIG_EXAMPLES 1 00:11:56.218 #undef SPDK_CONFIG_FC 00:11:56.218 #define SPDK_CONFIG_FC_PATH 00:11:56.218 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:56.219 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:56.219 #define SPDK_CONFIG_FSDEV 1 00:11:56.219 #undef SPDK_CONFIG_FUSE 00:11:56.219 #undef SPDK_CONFIG_FUZZER 00:11:56.219 #define SPDK_CONFIG_FUZZER_LIB 00:11:56.219 #undef SPDK_CONFIG_GOLANG 00:11:56.219 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:56.219 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:56.219 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:56.219 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:56.219 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:56.219 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:56.219 #undef SPDK_CONFIG_HAVE_LZ4 00:11:56.219 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:56.219 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:56.219 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:56.219 #define SPDK_CONFIG_IDXD 1 00:11:56.219 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:56.219 #undef SPDK_CONFIG_IPSEC_MB 00:11:56.219 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:56.219 #define SPDK_CONFIG_ISAL 1 00:11:56.219 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:56.219 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:56.219 #define SPDK_CONFIG_LIBDIR 00:11:56.219 #undef SPDK_CONFIG_LTO 00:11:56.219 #define SPDK_CONFIG_MAX_LCORES 128 00:11:56.219 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:56.219 #define SPDK_CONFIG_NVME_CUSE 1 00:11:56.219 #undef SPDK_CONFIG_OCF 00:11:56.219 #define SPDK_CONFIG_OCF_PATH 00:11:56.219 #define SPDK_CONFIG_OPENSSL_PATH 00:11:56.219 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:56.219 #define SPDK_CONFIG_PGO_DIR 00:11:56.219 #undef SPDK_CONFIG_PGO_USE 00:11:56.219 #define SPDK_CONFIG_PREFIX /usr/local 00:11:56.219 #undef SPDK_CONFIG_RAID5F 00:11:56.219 #undef SPDK_CONFIG_RBD 00:11:56.219 #define SPDK_CONFIG_RDMA 1 00:11:56.219 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:56.219 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:56.219 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:56.219 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:56.219 #define SPDK_CONFIG_SHARED 1 00:11:56.219 #undef SPDK_CONFIG_SMA 00:11:56.219 #define SPDK_CONFIG_TESTS 1 00:11:56.219 #undef SPDK_CONFIG_TSAN 00:11:56.219 #define SPDK_CONFIG_UBLK 1 00:11:56.219 #define SPDK_CONFIG_UBSAN 1 00:11:56.219 #undef SPDK_CONFIG_UNIT_TESTS 00:11:56.219 #undef SPDK_CONFIG_URING 00:11:56.219 #define SPDK_CONFIG_URING_PATH 00:11:56.219 #undef SPDK_CONFIG_URING_ZNS 00:11:56.219 #undef SPDK_CONFIG_USDT 00:11:56.219 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:56.219 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:56.219 #define SPDK_CONFIG_VFIO_USER 1 00:11:56.219 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:56.219 #define SPDK_CONFIG_VHOST 1 00:11:56.219 #define SPDK_CONFIG_VIRTIO 1 00:11:56.219 #undef SPDK_CONFIG_VTUNE 00:11:56.219 #define SPDK_CONFIG_VTUNE_DIR 00:11:56.219 #define SPDK_CONFIG_WERROR 1 00:11:56.219 #define SPDK_CONFIG_WPDK_DIR 00:11:56.219 #undef SPDK_CONFIG_XNVME 00:11:56.219 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:56.219 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:56.219 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.219 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.219 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:56.482 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:56.483 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:56.483 
10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:56.483 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:56.483 
10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:56.483 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:56.484 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:56.484 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1280349 ]] 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1280349 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.h75Nc6 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:56.485 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.h75Nc6/tests/target /tmp/spdk.h75Nc6 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50870521856 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11117998080 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375261184 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22446080 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919772672 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:56.486 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074487296 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:56.486 * Looking for test storage... 
00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50870521856 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13332590592 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.486 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:56.486 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:56.486 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:56.487 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:56.487 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.487 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.487 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.487 --rc genhtml_branch_coverage=1 00:11:56.487 --rc genhtml_function_coverage=1 00:11:56.487 --rc genhtml_legend=1 00:11:56.487 --rc geninfo_all_blocks=1 00:11:56.487 --rc geninfo_unexecuted_blocks=1 00:11:56.487 00:11:56.487 ' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.487 --rc genhtml_branch_coverage=1 00:11:56.487 --rc genhtml_function_coverage=1 00:11:56.487 --rc genhtml_legend=1 00:11:56.487 --rc geninfo_all_blocks=1 00:11:56.487 --rc geninfo_unexecuted_blocks=1 00:11:56.487 00:11:56.487 ' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.487 --rc genhtml_branch_coverage=1 00:11:56.487 --rc genhtml_function_coverage=1 00:11:56.487 --rc genhtml_legend=1 00:11:56.487 --rc geninfo_all_blocks=1 00:11:56.487 --rc geninfo_unexecuted_blocks=1 00:11:56.487 00:11:56.487 ' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.487 --rc genhtml_branch_coverage=1 00:11:56.487 --rc genhtml_function_coverage=1 00:11:56.487 --rc genhtml_legend=1 00:11:56.487 --rc geninfo_all_blocks=1 00:11:56.487 --rc geninfo_unexecuted_blocks=1 00:11:56.487 00:11:56.487 ' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.487 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.487 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.488 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.020 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:59.020 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:59.020 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:59.020 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.021 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:59.021 Found net devices under 0000:09:00.0: cvl_0_0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:59.021 Found net devices under 0000:09:00.1: cvl_0_1 00:11:59.021 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:59.021 00:11:59.021 --- 10.0.0.2 ping statistics --- 00:11:59.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.021 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:11:59.021 00:11:59.021 --- 10.0.0.1 ping statistics --- 00:11:59.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.021 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:59.021 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.021 ************************************ 00:11:59.021 START TEST nvmf_filesystem_no_in_capsule 00:11:59.021 ************************************ 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1282117 00:11:59.021 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1282117 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1282117 ']' 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.022 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.022 [2024-11-19 10:39:46.600326] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:11:59.022 [2024-11-19 10:39:46.600421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.280 [2024-11-19 10:39:46.672516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.280 [2024-11-19 10:39:46.730086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.280 [2024-11-19 10:39:46.730139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:59.280 [2024-11-19 10:39:46.730165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.280 [2024-11-19 10:39:46.730176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.280 [2024-11-19 10:39:46.730186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.280 [2024-11-19 10:39:46.731728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.280 [2024-11-19 10:39:46.731786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.280 [2024-11-19 10:39:46.731849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.280 [2024-11-19 10:39:46.731852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.280 [2024-11-19 10:39:46.878939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.280 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.538 Malloc1 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.538 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.539 [2024-11-19 10:39:47.081905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:59.539 10:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:59.539 { 00:11:59.539 "name": "Malloc1", 00:11:59.539 "aliases": [ 00:11:59.539 "24b6198c-3854-48cb-9845-8f649dff0a2d" 00:11:59.539 ], 00:11:59.539 "product_name": "Malloc disk", 00:11:59.539 "block_size": 512, 00:11:59.539 "num_blocks": 1048576, 00:11:59.539 "uuid": "24b6198c-3854-48cb-9845-8f649dff0a2d", 00:11:59.539 "assigned_rate_limits": { 00:11:59.539 "rw_ios_per_sec": 0, 00:11:59.539 "rw_mbytes_per_sec": 0, 00:11:59.539 "r_mbytes_per_sec": 0, 00:11:59.539 "w_mbytes_per_sec": 0 00:11:59.539 }, 00:11:59.539 "claimed": true, 00:11:59.539 "claim_type": "exclusive_write", 00:11:59.539 "zoned": false, 00:11:59.539 "supported_io_types": { 00:11:59.539 "read": true, 00:11:59.539 "write": true, 00:11:59.539 "unmap": true, 00:11:59.539 "flush": true, 00:11:59.539 "reset": true, 00:11:59.539 "nvme_admin": false, 00:11:59.539 "nvme_io": false, 00:11:59.539 "nvme_io_md": false, 00:11:59.539 "write_zeroes": true, 00:11:59.539 "zcopy": true, 00:11:59.539 "get_zone_info": false, 00:11:59.539 "zone_management": false, 00:11:59.539 "zone_append": false, 00:11:59.539 "compare": false, 00:11:59.539 "compare_and_write": 
false, 00:11:59.539 "abort": true, 00:11:59.539 "seek_hole": false, 00:11:59.539 "seek_data": false, 00:11:59.539 "copy": true, 00:11:59.539 "nvme_iov_md": false 00:11:59.539 }, 00:11:59.539 "memory_domains": [ 00:11:59.539 { 00:11:59.539 "dma_device_id": "system", 00:11:59.539 "dma_device_type": 1 00:11:59.539 }, 00:11:59.539 { 00:11:59.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.539 "dma_device_type": 2 00:11:59.539 } 00:11:59.539 ], 00:11:59.539 "driver_specific": {} 00:11:59.539 } 00:11:59.539 ]' 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:59.539 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:59.796 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:59.796 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:59.796 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:59.796 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:59.796 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.359 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:00.359 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.359 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.359 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.359 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:02.884 10:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:02.884 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:02.884 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.448 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:04.818 10:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.818 ************************************ 00:12:04.818 START TEST filesystem_ext4 00:12:04.818 ************************************ 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:04.818 10:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:04.818 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:04.818 mke2fs 1.47.0 (5-Feb-2023) 00:12:04.818 Discarding device blocks: 0/522240 done 00:12:04.818 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:04.818 Filesystem UUID: 57f5954c-68e5-4fff-ad9b-5a695b0a5c9b 00:12:04.818 Superblock backups stored on blocks: 00:12:04.818 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:04.818 00:12:04.818 Allocating group tables: 0/64 done 00:12:04.819 Writing inode tables: 0/64 done 00:12:05.076 Creating journal (8192 blocks): done 00:12:05.641 Writing superblocks and filesystem accounting information: 0/64 done 00:12:05.641 00:12:05.641 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:05.641 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.192 10:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1282117 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.192 00:12:12.192 real 0m6.604s 00:12:12.192 user 0m0.015s 00:12:12.192 sys 0m0.068s 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.192 ************************************ 00:12:12.192 END TEST filesystem_ext4 00:12:12.192 ************************************ 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.192 
10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.192 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 ************************************ 00:12:12.193 START TEST filesystem_btrfs 00:12:12.193 ************************************ 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:12.193 10:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.193 10:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:12.193 btrfs-progs v6.8.1 00:12:12.193 See https://btrfs.readthedocs.io for more information. 00:12:12.193 00:12:12.193 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:12.193 NOTE: several default settings have changed in version 5.15, please make sure 00:12:12.193 this does not affect your deployments: 00:12:12.193 - DUP for metadata (-m dup) 00:12:12.193 - enabled no-holes (-O no-holes) 00:12:12.193 - enabled free-space-tree (-R free-space-tree) 00:12:12.193 00:12:12.193 Label: (null) 00:12:12.193 UUID: e33fb63c-5826-4f16-a87c-7888befde578 00:12:12.193 Node size: 16384 00:12:12.193 Sector size: 4096 (CPU page size: 4096) 00:12:12.193 Filesystem size: 510.00MiB 00:12:12.193 Block group profiles: 00:12:12.193 Data: single 8.00MiB 00:12:12.193 Metadata: DUP 32.00MiB 00:12:12.193 System: DUP 8.00MiB 00:12:12.193 SSD detected: yes 00:12:12.193 Zoned device: no 00:12:12.193 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:12.193 Checksum: crc32c 00:12:12.193 Number of devices: 1 00:12:12.193 Devices: 00:12:12.193 ID SIZE PATH 00:12:12.193 1 510.00MiB /dev/nvme0n1p1 00:12:12.193 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.193 10:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1282117 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.193 00:12:12.193 real 0m0.743s 00:12:12.193 user 0m0.022s 00:12:12.193 sys 0m0.097s 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.193 
10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 ************************************ 00:12:12.193 END TEST filesystem_btrfs 00:12:12.193 ************************************ 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 ************************************ 00:12:12.193 START TEST filesystem_xfs 00:12:12.193 ************************************ 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.193 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:12.193 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:12.193 = sectsz=512 attr=2, projid32bit=1 00:12:12.193 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:12.193 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:12.193 data = bsize=4096 blocks=130560, imaxpct=25 00:12:12.193 = sunit=0 swidth=0 blks 00:12:12.193 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:12.193 log =internal log bsize=4096 blocks=16384, version=2 00:12:12.193 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:12.193 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:13.126 Discarding blocks...Done. 
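Editor's note: the `make_filesystem` helper traced above picks a different force flag per filesystem type — the ext4 branch sets `force=-F` (seen before `mkfs.ext4 -F`), while the btrfs and xfs runs fall through to `force=-f`. A minimal sketch of that selection logic, reconstructed from the traced branches (the function name `pick_force_flag` is hypothetical, not from the SPDK scripts):

```shell
# Hypothetical re-sketch of the force-flag choice visible in the
# make_filesystem trace: mkfs.ext4 forces with -F, mkfs.btrfs and
# mkfs.xfs force with -f.
pick_force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        echo "-F"
    else
        echo "-f"
    fi
}
```

Usage, mirroring the traced invocation: `mkfs.$fstype $(pick_force_flag "$fstype") /dev/nvme0n1p1`.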
00:12:13.126 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.126 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:15.021 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1282117 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.022 10:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.022 00:12:15.022 real 0m2.845s 00:12:15.022 user 0m0.016s 00:12:15.022 sys 0m0.061s 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.022 ************************************ 00:12:15.022 END TEST filesystem_xfs 00:12:15.022 ************************************ 00:12:15.022 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1282117 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1282117 ']' 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1282117 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1282117 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1282117' 00:12:15.279 killing process with pid 1282117 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1282117 00:12:15.279 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1282117 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.845 00:12:15.845 real 0m16.680s 00:12:15.845 user 1m4.526s 00:12:15.845 sys 0m2.128s 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 ************************************ 00:12:15.845 END TEST nvmf_filesystem_no_in_capsule 00:12:15.845 ************************************ 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.845 10:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 ************************************ 00:12:15.845 START TEST nvmf_filesystem_in_capsule 00:12:15.845 ************************************ 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1284221 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1284221 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1284221 ']' 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.845 10:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.845 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 [2024-11-19 10:40:03.335359] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:12:15.845 [2024-11-19 10:40:03.335442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.845 [2024-11-19 10:40:03.406656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.845 [2024-11-19 10:40:03.463550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.845 [2024-11-19 10:40:03.463614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.845 [2024-11-19 10:40:03.463643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.845 [2024-11-19 10:40:03.463654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.845 [2024-11-19 10:40:03.463664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
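Editor's note: the traces above and in `killprocess` repeatedly use `kill -0 <pid>` to test whether the nvmf target process is still alive without sending a real signal. A minimal sketch of that liveness check (the wrapper name `is_alive` is hypothetical; the pattern itself is what the log shows):

```shell
# Hypothetical wrapper for the `kill -0 $pid` liveness test used in
# the trace: signal 0 delivers nothing but reports whether the pid
# exists and is signalable.
is_alive() {
    kill -0 "$1" 2>/dev/null
}
```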
00:12:15.845 [2024-11-19 10:40:03.465318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.845 [2024-11-19 10:40:03.465396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.845 [2024-11-19 10:40:03.465423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.845 [2024-11-19 10:40:03.465427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.103 [2024-11-19 10:40:03.611845] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.103 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.361 Malloc1 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.361 10:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.361 [2024-11-19 10:40:03.808192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.361 10:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:16.361 { 00:12:16.361 "name": "Malloc1", 00:12:16.361 "aliases": [ 00:12:16.361 "8553058b-27f6-4529-b1e6-4f3800a95b34" 00:12:16.361 ], 00:12:16.361 "product_name": "Malloc disk", 00:12:16.361 "block_size": 512, 00:12:16.361 "num_blocks": 1048576, 00:12:16.361 "uuid": "8553058b-27f6-4529-b1e6-4f3800a95b34", 00:12:16.361 "assigned_rate_limits": { 00:12:16.361 "rw_ios_per_sec": 0, 00:12:16.361 "rw_mbytes_per_sec": 0, 00:12:16.361 "r_mbytes_per_sec": 0, 00:12:16.361 "w_mbytes_per_sec": 0 00:12:16.361 }, 00:12:16.361 "claimed": true, 00:12:16.361 "claim_type": "exclusive_write", 00:12:16.361 "zoned": false, 00:12:16.361 "supported_io_types": { 00:12:16.361 "read": true, 00:12:16.361 "write": true, 00:12:16.361 "unmap": true, 00:12:16.361 "flush": true, 00:12:16.361 "reset": true, 00:12:16.361 "nvme_admin": false, 00:12:16.361 "nvme_io": false, 00:12:16.361 "nvme_io_md": false, 00:12:16.361 "write_zeroes": true, 00:12:16.361 "zcopy": true, 00:12:16.361 "get_zone_info": false, 00:12:16.361 "zone_management": false, 00:12:16.361 "zone_append": false, 00:12:16.361 "compare": false, 00:12:16.361 "compare_and_write": false, 00:12:16.361 "abort": true, 00:12:16.361 "seek_hole": false, 00:12:16.361 "seek_data": false, 00:12:16.361 "copy": true, 00:12:16.361 "nvme_iov_md": false 00:12:16.361 }, 00:12:16.361 "memory_domains": [ 00:12:16.361 { 00:12:16.361 "dma_device_id": "system", 00:12:16.361 "dma_device_type": 1 00:12:16.361 }, 00:12:16.361 { 00:12:16.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.361 "dma_device_type": 2 00:12:16.361 } 00:12:16.361 ], 00:12:16.361 
"driver_specific": {} 00:12:16.361 } 00:12:16.361 ]' 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:16.361 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.294 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.294 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.294 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.294 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:17.294 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:19.191 10:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:19.191 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:19.448 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.379 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.312 ************************************ 00:12:21.312 START TEST filesystem_in_capsule_ext4 00:12:21.312 ************************************ 00:12:21.312 10:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:21.312 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.312 mke2fs 1.47.0 (5-Feb-2023) 00:12:21.312 Discarding device blocks: 
0/522240 done 00:12:21.312 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.312 Filesystem UUID: 2eaf4115-2b80-47d8-becc-2f293cd68dac 00:12:21.312 Superblock backups stored on blocks: 00:12:21.312 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.312 00:12:21.312 Allocating group tables: 0/64 done 00:12:21.312 Writing inode tables: 0/64 done 00:12:23.209 Creating journal (8192 blocks): done 00:12:24.961 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:12:24.961 00:12:24.961 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:24.961 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.511 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.511 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1284221 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.512 00:12:31.512 real 0m9.782s 00:12:31.512 user 0m0.025s 00:12:31.512 sys 0m0.059s 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:31.512 ************************************ 00:12:31.512 END TEST filesystem_in_capsule_ext4 00:12:31.512 ************************************ 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.512 ************************************ 00:12:31.512 START 
TEST filesystem_in_capsule_btrfs 00:12:31.512 ************************************ 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:31.512 btrfs-progs v6.8.1 00:12:31.512 See https://btrfs.readthedocs.io for more information. 00:12:31.512 00:12:31.512 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:31.512 NOTE: several default settings have changed in version 5.15, please make sure 00:12:31.512 this does not affect your deployments: 00:12:31.512 - DUP for metadata (-m dup) 00:12:31.512 - enabled no-holes (-O no-holes) 00:12:31.512 - enabled free-space-tree (-R free-space-tree) 00:12:31.512 00:12:31.512 Label: (null) 00:12:31.512 UUID: 89811ab8-98c4-4970-9955-e2a6eede5ac8 00:12:31.512 Node size: 16384 00:12:31.512 Sector size: 4096 (CPU page size: 4096) 00:12:31.512 Filesystem size: 510.00MiB 00:12:31.512 Block group profiles: 00:12:31.512 Data: single 8.00MiB 00:12:31.512 Metadata: DUP 32.00MiB 00:12:31.512 System: DUP 8.00MiB 00:12:31.512 SSD detected: yes 00:12:31.512 Zoned device: no 00:12:31.512 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:31.512 Checksum: crc32c 00:12:31.512 Number of devices: 1 00:12:31.512 Devices: 00:12:31.512 ID SIZE PATH 00:12:31.512 1 510.00MiB /dev/nvme0n1p1 00:12:31.512 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:31.512 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1284221 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.156 00:12:32.156 real 0m1.020s 00:12:32.156 user 0m0.014s 00:12:32.156 sys 0m0.104s 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.156 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:32.157 ************************************ 00:12:32.157 END TEST filesystem_in_capsule_btrfs 00:12:32.157 ************************************ 00:12:32.157 10:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.157 ************************************ 00:12:32.157 START TEST filesystem_in_capsule_xfs 00:12:32.157 ************************************ 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.157 
10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:32.440 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:32.440 = sectsz=512 attr=2, projid32bit=1 00:12:32.440 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:32.440 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:32.440 data = bsize=4096 blocks=130560, imaxpct=25 00:12:32.440 = sunit=0 swidth=0 blks 00:12:32.440 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:32.440 log =internal log bsize=4096 blocks=16384, version=2 00:12:32.440 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:32.440 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:33.005 Discarding blocks...Done. 
00:12:33.005 10:40:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.005 10:40:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1284221 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.905 00:12:34.905 real 0m2.809s 00:12:34.905 user 0m0.024s 00:12:34.905 sys 0m0.055s 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:34.905 ************************************ 00:12:34.905 END TEST filesystem_in_capsule_xfs 00:12:34.905 ************************************ 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:34.905 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.162 10:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1284221 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1284221 ']' 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1284221 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.162 10:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1284221 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1284221' 00:12:35.162 killing process with pid 1284221 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1284221 00:12:35.162 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1284221 00:12:35.420 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:35.420 00:12:35.420 real 0m19.740s 00:12:35.420 user 1m16.493s 00:12:35.420 sys 0m2.395s 00:12:35.420 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.420 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.420 ************************************ 00:12:35.420 END TEST nvmf_filesystem_in_capsule 00:12:35.420 ************************************ 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.679 rmmod nvme_tcp 00:12:35.679 rmmod nvme_fabrics 00:12:35.679 rmmod nvme_keyring 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.679 10:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.585 00:12:37.585 real 0m41.495s 00:12:37.585 user 2m22.182s 00:12:37.585 sys 0m6.451s 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 ************************************ 00:12:37.585 END TEST nvmf_filesystem 00:12:37.585 ************************************ 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 ************************************ 00:12:37.585 START TEST nvmf_target_discovery 00:12:37.585 ************************************ 00:12:37.585 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:37.845 * Looking for test storage... 
00:12:37.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:37.845 
10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.845 --rc genhtml_branch_coverage=1 00:12:37.845 --rc genhtml_function_coverage=1 00:12:37.845 --rc genhtml_legend=1 00:12:37.845 --rc geninfo_all_blocks=1 00:12:37.845 --rc geninfo_unexecuted_blocks=1 00:12:37.845 00:12:37.845 ' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.845 --rc genhtml_branch_coverage=1 00:12:37.845 --rc genhtml_function_coverage=1 00:12:37.845 --rc genhtml_legend=1 00:12:37.845 --rc geninfo_all_blocks=1 00:12:37.845 --rc geninfo_unexecuted_blocks=1 00:12:37.845 00:12:37.845 ' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.845 --rc genhtml_branch_coverage=1 00:12:37.845 --rc genhtml_function_coverage=1 00:12:37.845 --rc genhtml_legend=1 00:12:37.845 --rc geninfo_all_blocks=1 00:12:37.845 --rc geninfo_unexecuted_blocks=1 00:12:37.845 00:12:37.845 ' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.845 --rc genhtml_branch_coverage=1 00:12:37.845 --rc genhtml_function_coverage=1 00:12:37.845 --rc genhtml_legend=1 00:12:37.845 --rc geninfo_all_blocks=1 00:12:37.845 --rc geninfo_unexecuted_blocks=1 00:12:37.845 00:12:37.845 ' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.845 10:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.845 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.846 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.377 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.377 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:40.377 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:40.377 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.377 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.378 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:40.378 Found net devices under 0000:09:00.0: cvl_0_0 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.378 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:40.378 Found net devices under 0000:09:00.1: cvl_0_1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:12:40.378 00:12:40.378 --- 10.0.0.2 ping statistics --- 00:12:40.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.378 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:12:40.378 00:12:40.378 --- 10.0.0.1 ping statistics --- 00:12:40.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.378 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1288770 00:12:40.378 10:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1288770 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1288770 ']' 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.378 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.378 [2024-11-19 10:40:27.750924] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:12:40.378 [2024-11-19 10:40:27.751004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.378 [2024-11-19 10:40:27.822159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.378 [2024-11-19 10:40:27.881956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:40.378 [2024-11-19 10:40:27.882004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.378 [2024-11-19 10:40:27.882033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.378 [2024-11-19 10:40:27.882045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.378 [2024-11-19 10:40:27.882054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.378 [2024-11-19 10:40:27.883719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.378 [2024-11-19 10:40:27.883778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.378 [2024-11-19 10:40:27.883824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.378 [2024-11-19 10:40:27.883827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.636 [2024-11-19 10:40:28.037182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.636 Null1 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.636 
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.636 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 [2024-11-19 10:40:28.077488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 Null2 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 Null3 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 Null4 00:12:40.637 
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.637 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:40.895 00:12:40.895 Discovery Log Number of Records 6, Generation counter 6 00:12:40.895 =====Discovery Log Entry 0====== 00:12:40.895 trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: current discovery subsystem 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4420 00:12:40.895 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: explicit discovery connections, duplicate discovery information 00:12:40.895 sectype: none 00:12:40.895 =====Discovery Log Entry 1====== 00:12:40.895 trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: nvme subsystem 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4420 00:12:40.895 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: none 00:12:40.895 sectype: none 00:12:40.895 =====Discovery Log Entry 2====== 00:12:40.895 
trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: nvme subsystem 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4420 00:12:40.895 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: none 00:12:40.895 sectype: none 00:12:40.895 =====Discovery Log Entry 3====== 00:12:40.895 trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: nvme subsystem 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4420 00:12:40.895 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: none 00:12:40.895 sectype: none 00:12:40.895 =====Discovery Log Entry 4====== 00:12:40.895 trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: nvme subsystem 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4420 00:12:40.895 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: none 00:12:40.895 sectype: none 00:12:40.895 =====Discovery Log Entry 5====== 00:12:40.895 trtype: tcp 00:12:40.895 adrfam: ipv4 00:12:40.895 subtype: discovery subsystem referral 00:12:40.895 treq: not required 00:12:40.895 portid: 0 00:12:40.895 trsvcid: 4430 00:12:40.895 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.895 traddr: 10.0.0.2 00:12:40.895 eflags: none 00:12:40.895 sectype: none 00:12:40.895 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:40.895 Perform nvmf subsystem discovery via RPC 00:12:40.895 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:40.895 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.895 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.895 [ 00:12:40.895 { 00:12:40.895 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:40.895 "subtype": "Discovery", 00:12:40.895 "listen_addresses": [ 00:12:40.895 { 00:12:40.895 "trtype": "TCP", 00:12:40.895 "adrfam": "IPv4", 00:12:40.895 "traddr": "10.0.0.2", 00:12:40.895 "trsvcid": "4420" 00:12:40.895 } 00:12:40.895 ], 00:12:40.895 "allow_any_host": true, 00:12:40.895 "hosts": [] 00:12:40.895 }, 00:12:40.895 { 00:12:40.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.895 "subtype": "NVMe", 00:12:40.895 "listen_addresses": [ 00:12:40.895 { 00:12:40.895 "trtype": "TCP", 00:12:40.895 "adrfam": "IPv4", 00:12:40.895 "traddr": "10.0.0.2", 00:12:40.895 "trsvcid": "4420" 00:12:40.895 } 00:12:40.895 ], 00:12:40.895 "allow_any_host": true, 00:12:40.895 "hosts": [], 00:12:40.895 "serial_number": "SPDK00000000000001", 00:12:40.895 "model_number": "SPDK bdev Controller", 00:12:40.895 "max_namespaces": 32, 00:12:40.895 "min_cntlid": 1, 00:12:40.895 "max_cntlid": 65519, 00:12:40.895 "namespaces": [ 00:12:40.895 { 00:12:40.895 "nsid": 1, 00:12:40.895 "bdev_name": "Null1", 00:12:40.895 "name": "Null1", 00:12:40.895 "nguid": "94C386E8653D4A6A9DCCA9EC0C238BA6", 00:12:40.895 "uuid": "94c386e8-653d-4a6a-9dcc-a9ec0c238ba6" 00:12:40.895 } 00:12:40.895 ] 00:12:40.895 }, 00:12:40.895 { 00:12:40.895 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:40.895 "subtype": "NVMe", 00:12:40.895 "listen_addresses": [ 00:12:40.895 { 00:12:40.895 "trtype": "TCP", 00:12:40.895 "adrfam": "IPv4", 00:12:40.895 "traddr": "10.0.0.2", 00:12:40.895 "trsvcid": "4420" 00:12:40.895 } 00:12:40.895 ], 00:12:40.895 "allow_any_host": true, 00:12:40.895 "hosts": [], 00:12:40.895 "serial_number": "SPDK00000000000002", 00:12:40.895 "model_number": "SPDK bdev Controller", 00:12:40.895 "max_namespaces": 32, 00:12:40.895 "min_cntlid": 1, 00:12:40.895 "max_cntlid": 65519, 00:12:40.895 "namespaces": [ 00:12:40.895 { 00:12:40.895 "nsid": 1, 00:12:40.895 "bdev_name": "Null2", 00:12:40.895 "name": "Null2", 00:12:40.895 "nguid": "80BC1AE12851457AA16ACC693C338B7A", 
00:12:40.895 "uuid": "80bc1ae1-2851-457a-a16a-cc693c338b7a" 00:12:40.895 } 00:12:40.895 ] 00:12:40.895 }, 00:12:40.895 { 00:12:40.896 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:40.896 "subtype": "NVMe", 00:12:40.896 "listen_addresses": [ 00:12:40.896 { 00:12:40.896 "trtype": "TCP", 00:12:40.896 "adrfam": "IPv4", 00:12:40.896 "traddr": "10.0.0.2", 00:12:40.896 "trsvcid": "4420" 00:12:40.896 } 00:12:40.896 ], 00:12:40.896 "allow_any_host": true, 00:12:40.896 "hosts": [], 00:12:40.896 "serial_number": "SPDK00000000000003", 00:12:40.896 "model_number": "SPDK bdev Controller", 00:12:40.896 "max_namespaces": 32, 00:12:40.896 "min_cntlid": 1, 00:12:40.896 "max_cntlid": 65519, 00:12:40.896 "namespaces": [ 00:12:40.896 { 00:12:40.896 "nsid": 1, 00:12:40.896 "bdev_name": "Null3", 00:12:40.896 "name": "Null3", 00:12:40.896 "nguid": "7EBC71C894484AF1A8A1FAE45E3CFF9B", 00:12:40.896 "uuid": "7ebc71c8-9448-4af1-a8a1-fae45e3cff9b" 00:12:40.896 } 00:12:40.896 ] 00:12:40.896 }, 00:12:40.896 { 00:12:40.896 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:40.896 "subtype": "NVMe", 00:12:40.896 "listen_addresses": [ 00:12:40.896 { 00:12:40.896 "trtype": "TCP", 00:12:40.896 "adrfam": "IPv4", 00:12:40.896 "traddr": "10.0.0.2", 00:12:40.896 "trsvcid": "4420" 00:12:40.896 } 00:12:40.896 ], 00:12:40.896 "allow_any_host": true, 00:12:40.896 "hosts": [], 00:12:40.896 "serial_number": "SPDK00000000000004", 00:12:40.896 "model_number": "SPDK bdev Controller", 00:12:40.896 "max_namespaces": 32, 00:12:40.896 "min_cntlid": 1, 00:12:40.896 "max_cntlid": 65519, 00:12:40.896 "namespaces": [ 00:12:40.896 { 00:12:40.896 "nsid": 1, 00:12:40.896 "bdev_name": "Null4", 00:12:40.896 "name": "Null4", 00:12:40.896 "nguid": "D92EC9D8DF60441AA3D478A83424F260", 00:12:40.896 "uuid": "d92ec9d8-df60-441a-a3d4-78a83424f260" 00:12:40.896 } 00:12:40.896 ] 00:12:40.896 } 00:12:40.896 ] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 
10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.896 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.154 rmmod nvme_tcp 00:12:41.154 rmmod nvme_fabrics 00:12:41.154 rmmod nvme_keyring 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1288770 ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1288770 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1288770 ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1288770 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288770 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288770' 00:12:41.154 killing process with pid 1288770 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1288770 00:12:41.154 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1288770 00:12:41.413 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.414 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.320 00:12:43.320 real 0m5.715s 00:12:43.320 user 0m4.816s 00:12:43.320 sys 0m2.025s 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.320 ************************************ 00:12:43.320 END TEST nvmf_target_discovery 00:12:43.320 ************************************ 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.320 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.578 ************************************ 00:12:43.578 START TEST nvmf_referrals 00:12:43.578 ************************************ 00:12:43.578 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:43.578 * Looking for test storage... 
00:12:43.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.578 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:43.579 10:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.579 
--rc genhtml_branch_coverage=1 00:12:43.579 --rc genhtml_function_coverage=1 00:12:43.579 --rc genhtml_legend=1 00:12:43.579 --rc geninfo_all_blocks=1 00:12:43.579 --rc geninfo_unexecuted_blocks=1 00:12:43.579 00:12:43.579 ' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.579 --rc genhtml_branch_coverage=1 00:12:43.579 --rc genhtml_function_coverage=1 00:12:43.579 --rc genhtml_legend=1 00:12:43.579 --rc geninfo_all_blocks=1 00:12:43.579 --rc geninfo_unexecuted_blocks=1 00:12:43.579 00:12:43.579 ' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.579 --rc genhtml_branch_coverage=1 00:12:43.579 --rc genhtml_function_coverage=1 00:12:43.579 --rc genhtml_legend=1 00:12:43.579 --rc geninfo_all_blocks=1 00:12:43.579 --rc geninfo_unexecuted_blocks=1 00:12:43.579 00:12:43.579 ' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.579 --rc genhtml_branch_coverage=1 00:12:43.579 --rc genhtml_function_coverage=1 00:12:43.579 --rc genhtml_legend=1 00:12:43.579 --rc geninfo_all_blocks=1 00:12:43.579 --rc geninfo_unexecuted_blocks=1 00:12:43.579 00:12:43.579 ' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.579 
10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.579 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.580 10:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.580 10:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.580 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:46.110 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:46.110 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:46.110 Found net devices under 0000:09:00.0: cvl_0_0 00:12:46.110 10:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:46.110 Found net devices under 0000:09:00.1: cvl_0_1 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.110 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:12:46.111 00:12:46.111 --- 10.0.0.2 ping statistics --- 00:12:46.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.111 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:46.111 00:12:46.111 --- 10.0.0.1 ping statistics --- 00:12:46.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.111 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1290875 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1290875 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1290875 ']' 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.111 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.111 [2024-11-19 10:40:33.504644] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:12:46.111 [2024-11-19 10:40:33.504734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.111 [2024-11-19 10:40:33.578219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.111 [2024-11-19 10:40:33.638827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.111 [2024-11-19 10:40:33.638880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:46.111 [2024-11-19 10:40:33.638909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.111 [2024-11-19 10:40:33.638920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.111 [2024-11-19 10:40:33.638930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.111 [2024-11-19 10:40:33.640701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.111 [2024-11-19 10:40:33.640764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.111 [2024-11-19 10:40:33.640830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.111 [2024-11-19 10:40:33.640833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 [2024-11-19 10:40:33.801264] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 [2024-11-19 10:40:33.813537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:46.369 10:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.369 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.370 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.370 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.370 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.628 10:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.628 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.886 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.144 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.401 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.401 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.402 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.402 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:47.659 10:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.659 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:47.916 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.917 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.174 rmmod nvme_tcp 00:12:48.174 rmmod nvme_fabrics 00:12:48.174 rmmod nvme_keyring 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1290875 ']' 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1290875 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1290875 ']' 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1290875 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.174 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1290875 00:12:48.433 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.433 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.433 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1290875' 00:12:48.433 killing process with pid 1290875 00:12:48.433 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1290875 00:12:48.433 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1290875 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.433 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.972 00:12:50.972 real 0m7.101s 00:12:50.972 user 0m11.021s 00:12:50.972 sys 0m2.321s 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.972 
************************************ 00:12:50.972 END TEST nvmf_referrals 00:12:50.972 ************************************ 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.972 ************************************ 00:12:50.972 START TEST nvmf_connect_disconnect 00:12:50.972 ************************************ 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:50.972 * Looking for test storage... 
00:12:50.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:50.972 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.973 --rc genhtml_branch_coverage=1 00:12:50.973 --rc genhtml_function_coverage=1 00:12:50.973 --rc genhtml_legend=1 00:12:50.973 --rc geninfo_all_blocks=1 00:12:50.973 --rc geninfo_unexecuted_blocks=1 00:12:50.973 00:12:50.973 ' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.973 --rc genhtml_branch_coverage=1 00:12:50.973 --rc genhtml_function_coverage=1 00:12:50.973 --rc genhtml_legend=1 00:12:50.973 --rc geninfo_all_blocks=1 00:12:50.973 --rc geninfo_unexecuted_blocks=1 00:12:50.973 00:12:50.973 ' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.973 --rc genhtml_branch_coverage=1 00:12:50.973 --rc genhtml_function_coverage=1 00:12:50.973 --rc genhtml_legend=1 00:12:50.973 --rc geninfo_all_blocks=1 00:12:50.973 --rc geninfo_unexecuted_blocks=1 00:12:50.973 00:12:50.973 ' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.973 --rc genhtml_branch_coverage=1 00:12:50.973 --rc genhtml_function_coverage=1 00:12:50.973 --rc genhtml_legend=1 00:12:50.973 --rc geninfo_all_blocks=1 00:12:50.973 --rc geninfo_unexecuted_blocks=1 00:12:50.973 00:12:50.973 ' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.973 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.878 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.878 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.879 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:52.879 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:52.879 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.879 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:52.879 Found net devices under 0000:09:00.0: cvl_0_0 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.879 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:52.879 Found net devices under 0000:09:00.1: cvl_0_1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.879 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.879 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:53.139 00:12:53.139 --- 10.0.0.2 ping statistics --- 00:12:53.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.139 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:12:53.139 00:12:53.139 --- 10.0.0.1 ping statistics --- 00:12:53.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.139 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1293174 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1293174 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1293174 ']' 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.139 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.139 [2024-11-19 10:40:40.599678] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:12:53.139 [2024-11-19 10:40:40.599777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.139 [2024-11-19 10:40:40.673774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.139 [2024-11-19 10:40:40.730531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:53.139 [2024-11-19 10:40:40.730584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.139 [2024-11-19 10:40:40.730611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.139 [2024-11-19 10:40:40.730622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.139 [2024-11-19 10:40:40.730631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.139 [2024-11-19 10:40:40.732293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.139 [2024-11-19 10:40:40.732363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.139 [2024-11-19 10:40:40.732358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.139 [2024-11-19 10:40:40.732329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:53.398 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 [2024-11-19 10:40:40.881471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.398 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 10:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.399 [2024-11-19 10:40:40.949807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:53.399 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:56.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:07.538 10:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.538 rmmod nvme_tcp 00:13:07.538 rmmod nvme_fabrics 00:13:07.538 rmmod nvme_keyring 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1293174 ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1293174 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1293174 ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1293174 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1293174 
00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1293174' 00:13:07.538 killing process with pid 1293174 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1293174 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1293174 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.538 10:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.538 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.444 00:13:09.444 real 0m18.783s 00:13:09.444 user 0m56.081s 00:13:09.444 sys 0m3.411s 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:09.444 ************************************ 00:13:09.444 END TEST nvmf_connect_disconnect 00:13:09.444 ************************************ 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.444 ************************************ 00:13:09.444 START TEST nvmf_multitarget 00:13:09.444 ************************************ 00:13:09.444 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:09.444 * Looking for test storage... 
00:13:09.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.444 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.444 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.444 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.706 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.706 --rc genhtml_branch_coverage=1 00:13:09.706 --rc genhtml_function_coverage=1 00:13:09.706 --rc genhtml_legend=1 00:13:09.706 --rc geninfo_all_blocks=1 00:13:09.706 --rc geninfo_unexecuted_blocks=1 00:13:09.706 00:13:09.706 ' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.706 --rc genhtml_branch_coverage=1 00:13:09.706 --rc genhtml_function_coverage=1 00:13:09.706 --rc genhtml_legend=1 00:13:09.706 --rc geninfo_all_blocks=1 00:13:09.706 --rc geninfo_unexecuted_blocks=1 00:13:09.706 00:13:09.706 ' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.706 --rc genhtml_branch_coverage=1 00:13:09.706 --rc genhtml_function_coverage=1 00:13:09.706 --rc genhtml_legend=1 00:13:09.706 --rc geninfo_all_blocks=1 00:13:09.706 --rc geninfo_unexecuted_blocks=1 00:13:09.706 00:13:09.706 ' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.706 --rc genhtml_branch_coverage=1 00:13:09.706 --rc genhtml_function_coverage=1 00:13:09.706 --rc genhtml_legend=1 00:13:09.706 --rc geninfo_all_blocks=1 00:13:09.706 --rc geninfo_unexecuted_blocks=1 00:13:09.706 00:13:09.706 ' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.706 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.706 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.707 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.707 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:12.300 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.300 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:12.300 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.300 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:12.301 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.301 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:12.301 Found net devices under 0000:09:00.0: cvl_0_0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.301 
10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:12.301 Found net devices under 0000:09:00.1: cvl_0_1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.301 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:13:12.301 00:13:12.301 --- 10.0.0.2 ping statistics --- 00:13:12.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.301 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:13:12.301 00:13:12.301 --- 10.0.0.1 ping statistics --- 00:13:12.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.301 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1296887 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1296887 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1296887 ']' 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:12.301 [2024-11-19 10:40:59.489440] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:13:12.301 [2024-11-19 10:40:59.489523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.301 [2024-11-19 10:40:59.559421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.301 [2024-11-19 10:40:59.618528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.301 [2024-11-19 10:40:59.618573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:12.301 [2024-11-19 10:40:59.618603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.301 [2024-11-19 10:40:59.618615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.301 [2024-11-19 10:40:59.618624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.301 [2024-11-19 10:40:59.620278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.301 [2024-11-19 10:40:59.620368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.301 [2024-11-19 10:40:59.620342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.301 [2024-11-19 10:40:59.620371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.301 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.302 10:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:12.302 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:12.583 "nvmf_tgt_1" 00:13:12.583 10:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:12.583 "nvmf_tgt_2" 00:13:12.583 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.583 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:12.840 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:12.840 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:12.840 true 00:13:12.840 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:12.840 true 00:13:12.840 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.841 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.098 rmmod nvme_tcp 00:13:13.098 rmmod nvme_fabrics 00:13:13.098 rmmod nvme_keyring 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1296887 ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1296887 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1296887 ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1296887 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1296887 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1296887' 00:13:13.098 killing process with pid 1296887 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1296887 00:13:13.098 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1296887 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.356 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.357 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.357 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.895 00:13:15.895 real 0m6.009s 00:13:15.895 user 0m6.809s 00:13:15.895 sys 0m2.062s 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.895 ************************************ 00:13:15.895 END TEST nvmf_multitarget 00:13:15.895 ************************************ 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.895 10:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.895 ************************************ 00:13:15.895 START TEST nvmf_rpc 00:13:15.895 ************************************ 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.895 * Looking for test storage... 
00:13:15.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:15.895 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.895 10:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.896 --rc genhtml_branch_coverage=1 00:13:15.896 --rc genhtml_function_coverage=1 00:13:15.896 --rc genhtml_legend=1 00:13:15.896 --rc geninfo_all_blocks=1 00:13:15.896 --rc geninfo_unexecuted_blocks=1 
00:13:15.896 00:13:15.896 ' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.896 --rc genhtml_branch_coverage=1 00:13:15.896 --rc genhtml_function_coverage=1 00:13:15.896 --rc genhtml_legend=1 00:13:15.896 --rc geninfo_all_blocks=1 00:13:15.896 --rc geninfo_unexecuted_blocks=1 00:13:15.896 00:13:15.896 ' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.896 --rc genhtml_branch_coverage=1 00:13:15.896 --rc genhtml_function_coverage=1 00:13:15.896 --rc genhtml_legend=1 00:13:15.896 --rc geninfo_all_blocks=1 00:13:15.896 --rc geninfo_unexecuted_blocks=1 00:13:15.896 00:13:15.896 ' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.896 --rc genhtml_branch_coverage=1 00:13:15.896 --rc genhtml_function_coverage=1 00:13:15.896 --rc genhtml_legend=1 00:13:15.896 --rc geninfo_all_blocks=1 00:13:15.896 --rc geninfo_unexecuted_blocks=1 00:13:15.896 00:13:15.896 ' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.896 10:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.896 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.897 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.897 10:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.800 
10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:13:17.800 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.800 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:17.800 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:17.801 Found net devices under 0000:09:00.0: cvl_0_0 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:17.801 Found net devices under 0000:09:00.1: cvl_0_1 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.801 10:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.801 
10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.801 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:13:18.059 00:13:18.059 --- 10.0.0.2 ping statistics --- 00:13:18.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.059 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:18.059 00:13:18.059 --- 10.0.0.1 ping statistics --- 00:13:18.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.059 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.059 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1299054 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.060 
10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1299054 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1299054 ']' 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.060 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.060 [2024-11-19 10:41:05.569367] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:13:18.060 [2024-11-19 10:41:05.569461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.060 [2024-11-19 10:41:05.639663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.318 [2024-11-19 10:41:05.696085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.318 [2024-11-19 10:41:05.696130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.318 [2024-11-19 10:41:05.696157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.318 [2024-11-19 10:41:05.696168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:18.318 [2024-11-19 10:41:05.696177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.318 [2024-11-19 10:41:05.697822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.318 [2024-11-19 10:41:05.697886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.318 [2024-11-19 10:41:05.697996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.318 [2024-11-19 10:41:05.697999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:18.318 "tick_rate": 2700000000, 00:13:18.318 "poll_groups": [ 00:13:18.318 { 00:13:18.318 "name": "nvmf_tgt_poll_group_000", 00:13:18.318 "admin_qpairs": 0, 00:13:18.318 "io_qpairs": 0, 00:13:18.318 
"current_admin_qpairs": 0, 00:13:18.318 "current_io_qpairs": 0, 00:13:18.318 "pending_bdev_io": 0, 00:13:18.318 "completed_nvme_io": 0, 00:13:18.318 "transports": [] 00:13:18.318 }, 00:13:18.318 { 00:13:18.318 "name": "nvmf_tgt_poll_group_001", 00:13:18.318 "admin_qpairs": 0, 00:13:18.318 "io_qpairs": 0, 00:13:18.318 "current_admin_qpairs": 0, 00:13:18.318 "current_io_qpairs": 0, 00:13:18.318 "pending_bdev_io": 0, 00:13:18.318 "completed_nvme_io": 0, 00:13:18.318 "transports": [] 00:13:18.318 }, 00:13:18.318 { 00:13:18.318 "name": "nvmf_tgt_poll_group_002", 00:13:18.318 "admin_qpairs": 0, 00:13:18.318 "io_qpairs": 0, 00:13:18.318 "current_admin_qpairs": 0, 00:13:18.318 "current_io_qpairs": 0, 00:13:18.318 "pending_bdev_io": 0, 00:13:18.318 "completed_nvme_io": 0, 00:13:18.318 "transports": [] 00:13:18.318 }, 00:13:18.318 { 00:13:18.318 "name": "nvmf_tgt_poll_group_003", 00:13:18.318 "admin_qpairs": 0, 00:13:18.318 "io_qpairs": 0, 00:13:18.318 "current_admin_qpairs": 0, 00:13:18.318 "current_io_qpairs": 0, 00:13:18.318 "pending_bdev_io": 0, 00:13:18.318 "completed_nvme_io": 0, 00:13:18.318 "transports": [] 00:13:18.318 } 00:13:18.318 ] 00:13:18.318 }' 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.318 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.576 [2024-11-19 10:41:05.941891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:18.576 "tick_rate": 2700000000, 00:13:18.576 "poll_groups": [ 00:13:18.576 { 00:13:18.576 "name": "nvmf_tgt_poll_group_000", 00:13:18.576 "admin_qpairs": 0, 00:13:18.576 "io_qpairs": 0, 00:13:18.576 "current_admin_qpairs": 0, 00:13:18.576 "current_io_qpairs": 0, 00:13:18.576 "pending_bdev_io": 0, 00:13:18.576 "completed_nvme_io": 0, 00:13:18.576 "transports": [ 00:13:18.576 { 00:13:18.576 "trtype": "TCP" 00:13:18.576 } 00:13:18.576 ] 00:13:18.576 }, 00:13:18.576 { 00:13:18.576 "name": "nvmf_tgt_poll_group_001", 00:13:18.576 "admin_qpairs": 0, 00:13:18.576 "io_qpairs": 0, 00:13:18.576 "current_admin_qpairs": 0, 00:13:18.576 "current_io_qpairs": 0, 00:13:18.576 "pending_bdev_io": 0, 00:13:18.576 "completed_nvme_io": 0, 00:13:18.576 "transports": [ 00:13:18.576 { 00:13:18.576 "trtype": "TCP" 00:13:18.576 } 00:13:18.576 ] 00:13:18.576 }, 00:13:18.576 { 00:13:18.576 "name": "nvmf_tgt_poll_group_002", 00:13:18.576 "admin_qpairs": 0, 00:13:18.576 "io_qpairs": 0, 00:13:18.576 
"current_admin_qpairs": 0, 00:13:18.576 "current_io_qpairs": 0, 00:13:18.576 "pending_bdev_io": 0, 00:13:18.576 "completed_nvme_io": 0, 00:13:18.576 "transports": [ 00:13:18.576 { 00:13:18.576 "trtype": "TCP" 00:13:18.576 } 00:13:18.576 ] 00:13:18.576 }, 00:13:18.576 { 00:13:18.576 "name": "nvmf_tgt_poll_group_003", 00:13:18.576 "admin_qpairs": 0, 00:13:18.576 "io_qpairs": 0, 00:13:18.576 "current_admin_qpairs": 0, 00:13:18.576 "current_io_qpairs": 0, 00:13:18.576 "pending_bdev_io": 0, 00:13:18.576 "completed_nvme_io": 0, 00:13:18.576 "transports": [ 00:13:18.576 { 00:13:18.576 "trtype": "TCP" 00:13:18.576 } 00:13:18.576 ] 00:13:18.576 } 00:13:18.576 ] 00:13:18.576 }' 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.576 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.576 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:18.576 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 Malloc1 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 [2024-11-19 10:41:06.119000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.577 
10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:13:18.577 [2024-11-19 10:41:06.141699] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:18.577 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:18.577 could not add new controller: failed to write to nvme-fabrics device 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.577 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.577 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.510 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.510 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.510 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.510 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:19.510 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.426 10:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.426 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.427 [2024-11-19 10:41:08.937907] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:13:21.427 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:21.427 could not add new controller: failed to write to nvme-fabrics device 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.427 10:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.427 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.993 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.993 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.993 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.993 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.993 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 [2024-11-19 10:41:11.684047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.519 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.776 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.776 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.776 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.776 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:24.776 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 10:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 [2024-11-19 10:41:14.439922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.302 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.559 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.559 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.559 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.559 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.559 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
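The iterations traced above all exercise the same subsystem lifecycle. A minimal sketch of one iteration of that loop (`target/rpc.sh` lines @81-94 in the trace), using the NQN, serial, address, and port shown in the log — here `rpc_cmd`, `waitforserial`, and `waitforserial_disconnect` are the autotest helper functions visible in the trace, and the body is wrapped in a hypothetical `connect_cycle` function for illustration:

```shell
# Sketch of one loop iteration: bring up the subsystem over the SPDK RPC
# wrapper, connect from the host side with nvme-cli, wait for the namespace
# to appear, then tear everything back down.
connect_cycle() {
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # Host side: connect, wait for the serial to surface, then disconnect.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME

    # Target-side cleanup: detach the namespace, then delete the subsystem.
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
}
```

The trace repeats this cycle several times back to back, which is why the same RPC sequence and the same `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice recur at each iteration.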
00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 [2024-11-19 10:41:17.265607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.083 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.648 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.648 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.648 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:30.648 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:30.648 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:32.545 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.545 [2024-11-19 10:41:20.088670] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.545 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.546 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.478 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.478 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.478 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.478 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:33.478 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 [2024-11-19 10:41:22.925108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.306 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.306 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.306 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.306 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:36.306 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:38.202 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 [2024-11-19 10:41:25.742798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 [2024-11-19 10:41:25.790859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.203 
10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 [2024-11-19 10:41:25.839012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
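The second loop in the trace (`target/rpc.sh` lines @99-107) drops the host connection entirely: each iteration creates the subsystem, wires it up, then immediately detaches namespace 1 and deletes the subsystem. A sketch of that loop under the same assumptions as above (`rpc_cmd` is the autotest RPC wrapper; `run_ns_cycle` is a hypothetical wrapper function added here so the loop count can be passed in):

```shell
# Sketch of the namespace attach/detach loop: no nvme connect between
# setup and teardown, so this stresses only the target-side RPC path.
run_ns_cycle() {
    local loops=$1
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
}
```

Note that `nvmf_subsystem_add_ns` here takes no explicit `-n` argument, so the namespace gets ID 1 by default, which is the ID the subsequent `nvmf_subsystem_remove_ns ... 1` removes.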
00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:38.461 
10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 [2024-11-19 10:41:25.887173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 [2024-11-19 
10:41:25.935367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 
10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:38.461 "tick_rate": 2700000000, 00:13:38.461 "poll_groups": [ 00:13:38.461 { 00:13:38.461 "name": "nvmf_tgt_poll_group_000", 00:13:38.461 "admin_qpairs": 2, 00:13:38.461 "io_qpairs": 84, 00:13:38.461 "current_admin_qpairs": 0, 00:13:38.461 "current_io_qpairs": 0, 00:13:38.461 "pending_bdev_io": 0, 00:13:38.461 "completed_nvme_io": 186, 00:13:38.461 "transports": [ 00:13:38.461 { 00:13:38.461 "trtype": "TCP" 00:13:38.461 } 00:13:38.461 ] 00:13:38.461 }, 00:13:38.461 { 00:13:38.462 "name": "nvmf_tgt_poll_group_001", 00:13:38.462 "admin_qpairs": 2, 00:13:38.462 "io_qpairs": 84, 00:13:38.462 "current_admin_qpairs": 0, 00:13:38.462 "current_io_qpairs": 0, 00:13:38.462 "pending_bdev_io": 0, 00:13:38.462 "completed_nvme_io": 173, 00:13:38.462 "transports": [ 00:13:38.462 { 00:13:38.462 "trtype": "TCP" 00:13:38.462 } 00:13:38.462 ] 00:13:38.462 }, 00:13:38.462 { 00:13:38.462 "name": "nvmf_tgt_poll_group_002", 00:13:38.462 "admin_qpairs": 1, 00:13:38.462 "io_qpairs": 84, 00:13:38.462 "current_admin_qpairs": 0, 00:13:38.462 "current_io_qpairs": 0, 00:13:38.462 "pending_bdev_io": 0, 00:13:38.462 "completed_nvme_io": 193, 00:13:38.462 "transports": [ 00:13:38.462 { 00:13:38.462 "trtype": "TCP" 00:13:38.462 } 00:13:38.462 ] 00:13:38.462 }, 00:13:38.462 { 00:13:38.462 "name": "nvmf_tgt_poll_group_003", 00:13:38.462 "admin_qpairs": 2, 00:13:38.462 "io_qpairs": 84, 
00:13:38.462 "current_admin_qpairs": 0, 00:13:38.462 "current_io_qpairs": 0, 00:13:38.462 "pending_bdev_io": 0, 00:13:38.462 "completed_nvme_io": 134, 00:13:38.462 "transports": [ 00:13:38.462 { 00:13:38.462 "trtype": "TCP" 00:13:38.462 } 00:13:38.462 ] 00:13:38.462 } 00:13:38.462 ] 00:13:38.462 }' 00:13:38.462 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:38.462 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:38.462 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:38.462 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.462 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.462 rmmod nvme_tcp 00:13:38.719 rmmod nvme_fabrics 00:13:38.719 rmmod nvme_keyring 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1299054 ']' 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1299054 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1299054 ']' 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1299054 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:38.719 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.720 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299054 00:13:38.720 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.720 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.720 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299054' 00:13:38.720 killing process with pid 1299054 00:13:38.720 10:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1299054 00:13:38.720 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1299054 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.977 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:40.885 00:13:40.885 real 0m25.458s 00:13:40.885 user 1m22.229s 00:13:40.885 sys 0m4.300s 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.885 ************************************ 00:13:40.885 END TEST 
nvmf_rpc 00:13:40.885 ************************************ 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.885 10:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.146 ************************************ 00:13:41.146 START TEST nvmf_invalid 00:13:41.146 ************************************ 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:41.146 * Looking for test storage... 00:13:41.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.146 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.147 --rc genhtml_branch_coverage=1 00:13:41.147 --rc genhtml_function_coverage=1 00:13:41.147 --rc genhtml_legend=1 00:13:41.147 --rc geninfo_all_blocks=1 00:13:41.147 --rc geninfo_unexecuted_blocks=1 00:13:41.147 00:13:41.147 ' 
00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.147 --rc genhtml_branch_coverage=1 00:13:41.147 --rc genhtml_function_coverage=1 00:13:41.147 --rc genhtml_legend=1 00:13:41.147 --rc geninfo_all_blocks=1 00:13:41.147 --rc geninfo_unexecuted_blocks=1 00:13:41.147 00:13:41.147 ' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.147 --rc genhtml_branch_coverage=1 00:13:41.147 --rc genhtml_function_coverage=1 00:13:41.147 --rc genhtml_legend=1 00:13:41.147 --rc geninfo_all_blocks=1 00:13:41.147 --rc geninfo_unexecuted_blocks=1 00:13:41.147 00:13:41.147 ' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.147 --rc genhtml_branch_coverage=1 00:13:41.147 --rc genhtml_function_coverage=1 00:13:41.147 --rc genhtml_legend=1 00:13:41.147 --rc geninfo_all_blocks=1 00:13:41.147 --rc geninfo_unexecuted_blocks=1 00:13:41.147 00:13:41.147 ' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.147 10:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.147 
10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.147 10:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.147 10:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:41.147 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:41.148 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:41.148 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.679 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.679 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:43.679 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:43.679 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:43.679 Found net devices under 0000:09:00.0: cvl_0_0 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:43.679 Found net devices under 0000:09:00.1: cvl_0_1 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.679 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.680 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.680 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.680 10:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:13:43.680 00:13:43.680 --- 10.0.0.2 ping statistics --- 00:13:43.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.680 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:13:43.680 00:13:43.680 --- 10.0.0.1 ping statistics --- 00:13:43.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.680 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.680 10:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1303558 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1303558 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1303558 ']' 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.680 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.680 [2024-11-19 10:41:31.124990] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:13:43.680 [2024-11-19 10:41:31.125079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.680 [2024-11-19 10:41:31.197170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.680 [2024-11-19 10:41:31.258008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.680 [2024-11-19 10:41:31.258056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.680 [2024-11-19 10:41:31.258084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.680 [2024-11-19 10:41:31.258096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.680 [2024-11-19 10:41:31.258106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.680 [2024-11-19 10:41:31.259820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.680 [2024-11-19 10:41:31.259885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.680 [2024-11-19 10:41:31.259953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.680 [2024-11-19 10:41:31.259955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:43.938 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10032 00:13:44.196 [2024-11-19 10:41:31.721278] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:44.196 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:44.196 { 00:13:44.196 "nqn": "nqn.2016-06.io.spdk:cnode10032", 00:13:44.196 "tgt_name": "foobar", 00:13:44.196 "method": "nvmf_create_subsystem", 00:13:44.196 "req_id": 1 00:13:44.196 } 00:13:44.196 Got JSON-RPC error 
response 00:13:44.196 response: 00:13:44.196 { 00:13:44.196 "code": -32603, 00:13:44.196 "message": "Unable to find target foobar" 00:13:44.196 }' 00:13:44.196 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:44.196 { 00:13:44.196 "nqn": "nqn.2016-06.io.spdk:cnode10032", 00:13:44.196 "tgt_name": "foobar", 00:13:44.196 "method": "nvmf_create_subsystem", 00:13:44.196 "req_id": 1 00:13:44.196 } 00:13:44.196 Got JSON-RPC error response 00:13:44.196 response: 00:13:44.196 { 00:13:44.196 "code": -32603, 00:13:44.196 "message": "Unable to find target foobar" 00:13:44.196 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:44.196 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:44.196 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25122 00:13:44.453 [2024-11-19 10:41:32.018269] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25122: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:44.453 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:44.453 { 00:13:44.453 "nqn": "nqn.2016-06.io.spdk:cnode25122", 00:13:44.453 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.453 "method": "nvmf_create_subsystem", 00:13:44.453 "req_id": 1 00:13:44.453 } 00:13:44.453 Got JSON-RPC error response 00:13:44.453 response: 00:13:44.453 { 00:13:44.453 "code": -32602, 00:13:44.453 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.453 }' 00:13:44.453 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:44.453 { 00:13:44.453 "nqn": "nqn.2016-06.io.spdk:cnode25122", 00:13:44.453 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.453 "method": "nvmf_create_subsystem", 
00:13:44.453 "req_id": 1 00:13:44.453 } 00:13:44.453 Got JSON-RPC error response 00:13:44.453 response: 00:13:44.453 { 00:13:44.453 "code": -32602, 00:13:44.453 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.453 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:44.453 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:44.453 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3365 00:13:44.710 [2024-11-19 10:41:32.283163] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3365: invalid model number 'SPDK_Controller' 00:13:44.710 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:44.710 { 00:13:44.710 "nqn": "nqn.2016-06.io.spdk:cnode3365", 00:13:44.710 "model_number": "SPDK_Controller\u001f", 00:13:44.710 "method": "nvmf_create_subsystem", 00:13:44.710 "req_id": 1 00:13:44.710 } 00:13:44.710 Got JSON-RPC error response 00:13:44.710 response: 00:13:44.710 { 00:13:44.710 "code": -32602, 00:13:44.710 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.710 }' 00:13:44.710 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:44.710 { 00:13:44.710 "nqn": "nqn.2016-06.io.spdk:cnode3365", 00:13:44.710 "model_number": "SPDK_Controller\u001f", 00:13:44.710 "method": "nvmf_create_subsystem", 00:13:44.710 "req_id": 1 00:13:44.711 } 00:13:44.711 Got JSON-RPC error response 00:13:44.711 response: 00:13:44.711 { 00:13:44.711 "code": -32602, 00:13:44.711 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.711 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.711 10:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.711 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:44.968 10:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:44.968 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 
00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 
00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:13:44.969 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ot&m/{M!k:r*v /dev/null' 00:13:48.354 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.287 00:13:50.287 real 0m9.272s 00:13:50.287 user 0m22.042s 00:13:50.287 sys 0m2.654s 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.287 ************************************ 00:13:50.287 END TEST nvmf_invalid 00:13:50.287 ************************************ 00:13:50.287 10:41:37 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.287 ************************************ 00:13:50.287 START TEST nvmf_connect_stress 00:13:50.287 ************************************ 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.287 * Looking for test storage... 00:13:50.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:50.287 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.545 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.545 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:50.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.546 --rc genhtml_branch_coverage=1 00:13:50.546 --rc genhtml_function_coverage=1 00:13:50.546 --rc genhtml_legend=1 00:13:50.546 --rc 
geninfo_all_blocks=1 00:13:50.546 --rc geninfo_unexecuted_blocks=1 00:13:50.546 00:13:50.546 ' 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:50.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.546 --rc genhtml_branch_coverage=1 00:13:50.546 --rc genhtml_function_coverage=1 00:13:50.546 --rc genhtml_legend=1 00:13:50.546 --rc geninfo_all_blocks=1 00:13:50.546 --rc geninfo_unexecuted_blocks=1 00:13:50.546 00:13:50.546 ' 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:50.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.546 --rc genhtml_branch_coverage=1 00:13:50.546 --rc genhtml_function_coverage=1 00:13:50.546 --rc genhtml_legend=1 00:13:50.546 --rc geninfo_all_blocks=1 00:13:50.546 --rc geninfo_unexecuted_blocks=1 00:13:50.546 00:13:50.546 ' 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:50.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.546 --rc genhtml_branch_coverage=1 00:13:50.546 --rc genhtml_function_coverage=1 00:13:50.546 --rc genhtml_legend=1 00:13:50.546 --rc geninfo_all_blocks=1 00:13:50.546 --rc geninfo_unexecuted_blocks=1 00:13:50.546 00:13:50.546 ' 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.546 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.546 
10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.546 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.077 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.078 10:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:53.078 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:53.078 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.078 10:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:53.078 Found net devices under 0000:09:00.0: cvl_0_0 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:53.078 Found net devices under 0000:09:00.1: cvl_0_1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:13:53.078 00:13:53.078 --- 10.0.0.2 ping statistics --- 00:13:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.078 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:13:53.078 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:13:53.078 00:13:53.078 --- 10.0.0.1 ping statistics --- 00:13:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.079 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1306208 00:13:53.079 10:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1306208 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1306208 ']' 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.079 [2024-11-19 10:41:40.416866] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:13:53.079 [2024-11-19 10:41:40.416969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.079 [2024-11-19 10:41:40.494866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:53.079 [2024-11-19 10:41:40.555570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:53.079 [2024-11-19 10:41:40.555643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.079 [2024-11-19 10:41:40.555671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.079 [2024-11-19 10:41:40.555682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.079 [2024-11-19 10:41:40.555692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.079 [2024-11-19 10:41:40.557223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.079 [2024-11-19 10:41:40.557276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.079 [2024-11-19 10:41:40.557279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.079 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.337 [2024-11-19 10:41:40.706129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.337 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.338 [2024-11-19 10:41:40.723624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.338 NULL1 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1306345 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.338 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.597 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.597 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:53.597 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.597 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.597 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.855 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.855 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:53.855 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.855 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.855 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.421 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.421 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:54.421 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.421 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.421 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.679 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.679 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:54.679 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.679 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.679 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.937 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.937 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:54.937 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.937 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.937 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.194 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.194 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:55.194 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.194 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.194 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.452 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.452 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:55.452 10:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.452 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.452 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.016 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.016 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:56.017 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.017 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.017 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.274 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:56.274 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.274 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.274 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.531 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.531 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:56.531 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.531 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.531 
10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.788 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.788 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:56.788 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.788 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.788 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.046 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.046 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:57.046 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.046 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.046 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.610 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.610 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:57.611 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.611 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.611 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.868 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.868 
10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:57.868 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.868 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.868 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.125 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.125 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:58.125 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.125 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.125 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.383 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.383 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:58.383 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.383 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.383 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.640 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:58.640 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:58.640 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.640 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.204 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.204 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:59.204 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.204 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.204 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.461 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.461 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:59.461 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.461 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.461 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.718 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.718 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:59.718 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.718 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.718 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:59.975 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.975 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:13:59.975 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.975 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.976 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.538 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.538 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:00.538 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.538 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.538 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.795 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.795 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:00.795 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.795 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.795 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.051 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.051 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1306345 00:14:01.051 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.051 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.051 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.308 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.308 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:01.308 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.308 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.308 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.564 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.564 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:01.565 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.565 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.565 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.128 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.128 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:02.128 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.128 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:02.128 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.384 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.384 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:02.384 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.384 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.384 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.642 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.642 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:02.642 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.642 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.642 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.899 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.899 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:02.899 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.899 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.899 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.157 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:03.157 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:03.157 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.157 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.157 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1306345 00:14:03.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1306345) - No such process 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1306345 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:03.673 10:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.673 rmmod nvme_tcp 00:14:03.673 rmmod nvme_fabrics 00:14:03.673 rmmod nvme_keyring 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1306208 ']' 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1306208 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1306208 ']' 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1306208 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1306208 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1306208' 00:14:03.673 killing process with pid 1306208 00:14:03.673 10:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1306208
00:14:03.673 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1306208
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:03.932 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:03.933 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:03.933 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:05.847
00:14:05.847 real 0m15.576s
00:14:05.847 user 0m38.600s
00:14:05.847 sys 0m5.993s
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:05.847 ************************************
00:14:05.847 END TEST nvmf_connect_stress
00:14:05.847 ************************************
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:05.847 10:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:06.106 ************************************
00:14:06.106 START TEST nvmf_fused_ordering
00:14:06.106 ************************************
00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:06.106 * Looking for test storage...
00:14:06.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:06.106 10:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.106 10:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.106 --rc genhtml_branch_coverage=1 00:14:06.106 --rc genhtml_function_coverage=1 00:14:06.106 --rc genhtml_legend=1 00:14:06.106 --rc geninfo_all_blocks=1 00:14:06.106 --rc geninfo_unexecuted_blocks=1 00:14:06.106 00:14:06.106 ' 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.106 --rc genhtml_branch_coverage=1 00:14:06.106 --rc genhtml_function_coverage=1 00:14:06.106 --rc genhtml_legend=1 00:14:06.106 --rc geninfo_all_blocks=1 00:14:06.106 --rc geninfo_unexecuted_blocks=1 00:14:06.106 00:14:06.106 ' 00:14:06.106 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.106 --rc genhtml_branch_coverage=1 00:14:06.106 --rc genhtml_function_coverage=1 00:14:06.106 --rc genhtml_legend=1 00:14:06.106 --rc geninfo_all_blocks=1 00:14:06.106 --rc geninfo_unexecuted_blocks=1 00:14:06.107 00:14:06.107 ' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.107 --rc genhtml_branch_coverage=1 00:14:06.107 --rc genhtml_function_coverage=1 00:14:06.107 --rc genhtml_legend=1 00:14:06.107 --rc geninfo_all_blocks=1 00:14:06.107 --rc geninfo_unexecuted_blocks=1 00:14:06.107 00:14:06.107 ' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.107 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.641 10:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:08.641 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:08.641 10:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:08.641 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.641 10:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:08.641 Found net devices under 0000:09:00.0: cvl_0_0 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:08.641 Found net devices under 0000:09:00.1: cvl_0_1 
00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.641 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:08.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:08.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:14:08.642 00:14:08.642 --- 10.0.0.2 ping statistics --- 00:14:08.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.642 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:14:08.642 00:14:08.642 --- 10.0.0.1 ping statistics --- 00:14:08.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.642 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:08.642 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:08.642 10:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1309510 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1309510 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1309510 ']' 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.642 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 [2024-11-19 10:41:56.070207] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:14:08.642 [2024-11-19 10:41:56.070299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.642 [2024-11-19 10:41:56.140694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.642 [2024-11-19 10:41:56.197132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.642 [2024-11-19 10:41:56.197200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.642 [2024-11-19 10:41:56.197213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.642 [2024-11-19 10:41:56.197237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.642 [2024-11-19 10:41:56.197247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.642 [2024-11-19 10:41:56.197849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 [2024-11-19 10:41:56.337902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 [2024-11-19 10:41:56.354100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 NULL1
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.900 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:08.900 [2024-11-19 10:41:56.401528] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... [2024-11-19 10:41:56.401571] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309538 ]
00:14:09.466 Attached to nqn.2016-06.io.spdk:cnode1
00:14:09.466 Namespace ID: 1 size: 1GB
00:14:09.466 fused_ordering(0) 00:14:09.466 fused_ordering(1) 00:14:09.466 fused_ordering(2) 00:14:09.466 fused_ordering(3) 00:14:09.466 fused_ordering(4) 00:14:09.466 fused_ordering(5) 00:14:09.466 fused_ordering(6) 00:14:09.466 fused_ordering(7) 00:14:09.466 fused_ordering(8) 00:14:09.466 fused_ordering(9) 00:14:09.466 fused_ordering(10) 00:14:09.466 fused_ordering(11) 00:14:09.466 fused_ordering(12) 00:14:09.466 fused_ordering(13) 00:14:09.466 fused_ordering(14) 00:14:09.466 fused_ordering(15) 00:14:09.466 fused_ordering(16) 00:14:09.466 fused_ordering(17) 00:14:09.466 fused_ordering(18) 00:14:09.466 fused_ordering(19) 00:14:09.466 fused_ordering(20) 00:14:09.466 fused_ordering(21) 00:14:09.466 fused_ordering(22) 00:14:09.466 fused_ordering(23) 00:14:09.466 fused_ordering(24) 00:14:09.466 fused_ordering(25) 00:14:09.466 fused_ordering(26) 00:14:09.466 fused_ordering(27) 00:14:09.466
fused_ordering(28) … 00:14:11.487 fused_ordering(997) (sequential fused_ordering entries 28 through 997, timestamps advancing from 00:14:09.466 to 00:14:11.487)
00:14:11.487 fused_ordering(998) 00:14:11.487 fused_ordering(999) 00:14:11.487 fused_ordering(1000) 00:14:11.487 fused_ordering(1001) 00:14:11.487 fused_ordering(1002) 00:14:11.487 fused_ordering(1003) 00:14:11.487 fused_ordering(1004) 00:14:11.487 fused_ordering(1005) 00:14:11.487 fused_ordering(1006) 00:14:11.487 fused_ordering(1007) 00:14:11.487 fused_ordering(1008) 00:14:11.487 fused_ordering(1009) 00:14:11.487 fused_ordering(1010) 00:14:11.487 fused_ordering(1011) 00:14:11.487 fused_ordering(1012) 00:14:11.487 fused_ordering(1013) 00:14:11.487 fused_ordering(1014) 00:14:11.487 fused_ordering(1015) 00:14:11.487 fused_ordering(1016) 00:14:11.487 fused_ordering(1017) 00:14:11.487 fused_ordering(1018) 00:14:11.487 fused_ordering(1019) 00:14:11.487 fused_ordering(1020) 00:14:11.487 fused_ordering(1021) 00:14:11.487 fused_ordering(1022) 00:14:11.487 fused_ordering(1023) 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.487 rmmod nvme_tcp 00:14:11.487 rmmod nvme_fabrics 00:14:11.487 rmmod nvme_keyring 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1309510 ']' 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1309510 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1309510 ']' 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1309510 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1309510 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1309510' 00:14:11.487 killing process with pid 1309510 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1309510 00:14:11.487 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1309510 00:14:11.487 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.488 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:11.488 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.488 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:11.488 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.750 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:13.683 00:14:13.683 real 0m7.683s 00:14:13.683 user 0m5.247s 00:14:13.683 sys 0m3.196s 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.683 ************************************ 00:14:13.683 END TEST nvmf_fused_ordering 00:14:13.683 ************************************ 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.683 10:42:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.683 ************************************ 00:14:13.683 START TEST nvmf_ns_masking 00:14:13.683 ************************************ 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.683 * Looking for test storage... 00:14:13.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:13.683 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.942 10:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:13.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.942 --rc genhtml_branch_coverage=1 00:14:13.942 --rc genhtml_function_coverage=1 00:14:13.942 --rc genhtml_legend=1 00:14:13.942 --rc geninfo_all_blocks=1 00:14:13.942 --rc geninfo_unexecuted_blocks=1 00:14:13.942 00:14:13.942 ' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:13.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.942 --rc genhtml_branch_coverage=1 00:14:13.942 --rc genhtml_function_coverage=1 00:14:13.942 --rc genhtml_legend=1 00:14:13.942 --rc geninfo_all_blocks=1 00:14:13.942 --rc geninfo_unexecuted_blocks=1 00:14:13.942 00:14:13.942 ' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:13.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.942 --rc genhtml_branch_coverage=1 00:14:13.942 --rc genhtml_function_coverage=1 00:14:13.942 --rc genhtml_legend=1 00:14:13.942 --rc geninfo_all_blocks=1 00:14:13.942 --rc geninfo_unexecuted_blocks=1 00:14:13.942 00:14:13.942 ' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:13.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.942 --rc genhtml_branch_coverage=1 00:14:13.942 --rc 
genhtml_function_coverage=1 00:14:13.942 --rc genhtml_legend=1 00:14:13.942 --rc geninfo_all_blocks=1 00:14:13.942 --rc geninfo_unexecuted_blocks=1 00:14:13.942 00:14:13.942 ' 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.942 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=73d3d02a-bf8e-490a-842a-4579b8a84715 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=99bdad99-bd19-4562-9dde-7b2d3156d9d2 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=898eaa70-90f2-4deb-83ac-1b414787f5ba 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:13.943 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.477 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.478 10:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.478 10:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:16.478 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:16.478 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:14:16.478 Found net devices under 0000:09:00.0: cvl_0_0 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:16.478 Found net devices under 0000:09:00.1: cvl_0_1 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.478 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:14:16.479 00:14:16.479 --- 10.0.0.2 ping statistics --- 00:14:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.479 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:14:16.479 00:14:16.479 --- 10.0.0.1 ping statistics --- 00:14:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.479 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1311975 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1311975 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1311975 ']' 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 [2024-11-19 10:42:03.743616] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:14:16.479 [2024-11-19 10:42:03.743708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.479 [2024-11-19 10:42:03.815401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.479 [2024-11-19 10:42:03.871861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.479 [2024-11-19 10:42:03.871914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.479 [2024-11-19 10:42:03.871943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.479 [2024-11-19 10:42:03.871954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.479 [2024-11-19 10:42:03.871963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.479 [2024-11-19 10:42:03.872557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.479 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.479 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:16.737 [2024-11-19 10:42:04.255952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.737 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:16.738 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:16.738 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:16.996 Malloc1 00:14:16.996 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.564 Malloc2 00:14:17.564 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.822 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:18.079 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.337 [2024-11-19 10:42:05.727652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 898eaa70-90f2-4deb-83ac-1b414787f5ba -a 10.0.0.2 -s 4420 -i 4 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.337 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:18.337 10:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.861 [ 0]:0x1 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.861 10:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.861 
10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7e8c9088cada43bca92fd8de044df823 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7e8c9088cada43bca92fd8de044df823 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.861 [ 0]:0x1 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7e8c9088cada43bca92fd8de044df823 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7e8c9088cada43bca92fd8de044df823 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.861 [ 1]:0x2 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:20.861 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.118 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.376 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:21.634 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:21.634 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 898eaa70-90f2-4deb-83ac-1b414787f5ba -a 10.0.0.2 -s 4420 -i 4 00:14:21.900 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:21.900 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.900 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.900 10:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:21.900 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:21.900 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:23.878 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.879 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.136 [ 0]:0x2 00:14:24.136 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.136 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.136 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:24.136 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.136 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.394 [ 0]:0x1 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7e8c9088cada43bca92fd8de044df823 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7e8c9088cada43bca92fd8de044df823 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.394 [ 1]:0x2 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.394 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.652 [ 0]:0x2 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.652 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.910 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:24.910 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.910 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:24.910 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.910 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.168 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:25.168 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 898eaa70-90f2-4deb-83ac-1b414787f5ba -a 10.0.0.2 -s 4420 -i 4 00:14:25.425 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:25.425 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.426 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.426 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:25.426 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:25.426 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.322 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.580 [ 0]:0x1 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.580 10:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7e8c9088cada43bca92fd8de044df823 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7e8c9088cada43bca92fd8de044df823 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.580 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.838 [ 1]:0x2 00:14:27.838 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.838 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.838 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:27.838 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.838 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.096 
10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.096 [ 0]:0x2 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.096 10:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:28.096 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.354 [2024-11-19 10:42:15.946528] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:28.354 request: 00:14:28.354 { 00:14:28.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.354 "nsid": 2, 00:14:28.354 "host": "nqn.2016-06.io.spdk:host1", 00:14:28.354 "method": "nvmf_ns_remove_host", 00:14:28.354 "req_id": 1 00:14:28.354 } 00:14:28.354 Got JSON-RPC error response 00:14:28.354 response: 00:14:28.354 { 00:14:28.354 "code": -32602, 00:14:28.354 "message": "Invalid parameters" 00:14:28.354 } 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.354 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.612 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.612 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.612 10:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.612 [ 0]:0x2 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16b7564fe8d9484eb0fadefb8e04f6b0 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16b7564fe8d9484eb0fadefb8e04f6b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:28.612 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1314107 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1314107 /var/tmp/host.sock 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1314107 ']' 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.870 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.870 [2024-11-19 10:42:16.305969] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:14:28.870 [2024-11-19 10:42:16.306056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314107 ] 00:14:28.870 [2024-11-19 10:42:16.372734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.870 [2024-11-19 10:42:16.429491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.127 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.127 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:29.127 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.384 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 73d3d02a-bf8e-490a-842a-4579b8a84715 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 73D3D02ABF8E490A842A4579B8A84715 -i 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 99bdad99-bd19-4562-9dde-7b2d3156d9d2 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.949 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 99BDAD99BD1945629DDE7B2D3156D9D2 -i 00:14:30.207 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:30.771 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:30.771 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:30.771 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.336 nvme0n1 00:14:31.336 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.336 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.593 nvme1n2 00:14:31.593 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:31.593 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:31.593 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:31.593 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:31.593 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:32.158 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:32.158 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:32.158 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:32.158 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:32.159 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 73d3d02a-bf8e-490a-842a-4579b8a84715 == \7\3\d\3\d\0\2\a\-\b\f\8\e\-\4\9\0\a\-\8\4\2\a\-\4\5\7\9\b\8\a\8\4\7\1\5 ]] 00:14:32.159 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:32.159 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:32.159 10:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:32.417 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 99bdad99-bd19-4562-9dde-7b2d3156d9d2 == \9\9\b\d\a\d\9\9\-\b\d\1\9\-\4\5\6\2\-\9\d\d\e\-\7\b\2\d\3\1\5\6\d\9\d\2 ]] 00:14:32.417 10:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 73d3d02a-bf8e-490a-842a-4579b8a84715 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73D3D02ABF8E490A842A4579B8A84715 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73D3D02ABF8E490A842A4579B8A84715 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.983 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73D3D02ABF8E490A842A4579B8A84715 00:14:33.241 [2024-11-19 10:42:20.836785] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:33.241 [2024-11-19 10:42:20.836825] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:33.241 [2024-11-19 10:42:20.836854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.241 request: 00:14:33.241 { 00:14:33.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.241 "namespace": { 00:14:33.241 "bdev_name": "invalid", 00:14:33.241 "nsid": 1, 00:14:33.241 "nguid": "73D3D02ABF8E490A842A4579B8A84715", 00:14:33.241 "no_auto_visible": false 00:14:33.241 }, 00:14:33.241 "method": "nvmf_subsystem_add_ns", 00:14:33.241 "req_id": 1 00:14:33.241 } 00:14:33.241 Got JSON-RPC error response 00:14:33.241 response: 00:14:33.241 { 00:14:33.241 "code": -32602, 00:14:33.241 "message": "Invalid parameters" 00:14:33.241 } 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 73d3d02a-bf8e-490a-842a-4579b8a84715 00:14:33.241 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.499 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 73D3D02ABF8E490A842A4579B8A84715 -i 00:14:33.757 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:35.654 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:35.654 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:35.654 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1314107 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1314107 ']' 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1314107 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1314107 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1314107' 00:14:35.911 killing process with pid 1314107 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1314107 00:14:35.911 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1314107 00:14:36.475 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:36.733 rmmod nvme_tcp 00:14:36.733 rmmod 
nvme_fabrics 00:14:36.733 rmmod nvme_keyring 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1311975 ']' 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1311975 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1311975 ']' 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1311975 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1311975 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1311975' 00:14:36.733 killing process with pid 1311975 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1311975 00:14:36.733 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1311975 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.991 
10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.991 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.529 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:39.529 00:14:39.529 real 0m25.411s 00:14:39.529 user 0m36.813s 00:14:39.529 sys 0m4.651s 00:14:39.529 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.529 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:39.529 ************************************ 00:14:39.529 END TEST nvmf_ns_masking 00:14:39.530 ************************************ 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:39.530 ************************************ 00:14:39.530 START TEST nvmf_nvme_cli 00:14:39.530 ************************************ 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:39.530 * Looking for test storage... 00:14:39.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.530 10:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.530 --rc genhtml_branch_coverage=1 00:14:39.530 --rc genhtml_function_coverage=1 00:14:39.530 --rc genhtml_legend=1 00:14:39.530 --rc geninfo_all_blocks=1 00:14:39.530 --rc geninfo_unexecuted_blocks=1 00:14:39.530 
00:14:39.530 ' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.530 --rc genhtml_branch_coverage=1 00:14:39.530 --rc genhtml_function_coverage=1 00:14:39.530 --rc genhtml_legend=1 00:14:39.530 --rc geninfo_all_blocks=1 00:14:39.530 --rc geninfo_unexecuted_blocks=1 00:14:39.530 00:14:39.530 ' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.530 --rc genhtml_branch_coverage=1 00:14:39.530 --rc genhtml_function_coverage=1 00:14:39.530 --rc genhtml_legend=1 00:14:39.530 --rc geninfo_all_blocks=1 00:14:39.530 --rc geninfo_unexecuted_blocks=1 00:14:39.530 00:14:39.530 ' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.530 --rc genhtml_branch_coverage=1 00:14:39.530 --rc genhtml_function_coverage=1 00:14:39.530 --rc genhtml_legend=1 00:14:39.530 --rc geninfo_all_blocks=1 00:14:39.530 --rc geninfo_unexecuted_blocks=1 00:14:39.530 00:14:39.530 ' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.530 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.531 10:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:39.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:39.531 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:41.430 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:41.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:41.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.430 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.430 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:41.431 Found net devices under 0000:09:00.0: cvl_0_0 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:41.431 Found net devices under 0000:09:00.1: cvl_0_1 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.431 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.431 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.431 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.431 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.431 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.431 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:14:41.689 00:14:41.689 --- 10.0.0.2 ping statistics --- 00:14:41.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.689 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:14:41.689 00:14:41.689 --- 10.0.0.1 ping statistics --- 00:14:41.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.689 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.689 10:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.689 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1317024 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1317024 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1317024 ']' 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.690 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.690 [2024-11-19 10:42:29.176462] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:14:41.690 [2024-11-19 10:42:29.176559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.690 [2024-11-19 10:42:29.249134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.949 [2024-11-19 10:42:29.311999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.949 [2024-11-19 10:42:29.312047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.949 [2024-11-19 10:42:29.312080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.949 [2024-11-19 10:42:29.312093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.949 [2024-11-19 10:42:29.312103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.949 [2024-11-19 10:42:29.313840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.949 [2024-11-19 10:42:29.314106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.949 [2024-11-19 10:42:29.314225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.949 [2024-11-19 10:42:29.314230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 [2024-11-19 10:42:29.460034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 Malloc0 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 Malloc1 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.949 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:41.950 [2024-11-19 10:42:29.563362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.950 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:42.208 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.208 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420
00:14:42.208
00:14:42.208 Discovery Log Number of Records 2, Generation counter 2
00:14:42.208 =====Discovery Log Entry 0======
00:14:42.208 trtype: tcp
00:14:42.208 adrfam: ipv4
00:14:42.208 subtype: current discovery subsystem
00:14:42.208 treq: not required
00:14:42.208 portid: 0
00:14:42.208 trsvcid: 4420
00:14:42.208 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:42.208 traddr: 10.0.0.2
00:14:42.208 eflags: explicit discovery connections, duplicate discovery information
00:14:42.208 sectype: none
00:14:42.208 =====Discovery Log Entry 1======
00:14:42.208 trtype: tcp
00:14:42.208 adrfam: ipv4
00:14:42.208 subtype: nvme subsystem
00:14:42.208 treq: not required
00:14:42.208 portid: 0
00:14:42.208 trsvcid: 4420
00:14:42.208 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:42.208 traddr: 10.0.0.2
00:14:42.208 eflags: none
00:14:42.208 sectype: none
00:14:42.208 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:14:42.208 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:14:42.209 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:42.773 10:42:30
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:42.773 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:42.773 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.773 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:42.773 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:42.773 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:45.295 
10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:45.295 /dev/nvme0n2 ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:45.295 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.554 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.554 rmmod nvme_tcp 00:14:45.554 rmmod nvme_fabrics 00:14:45.554 rmmod nvme_keyring 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1317024 ']' 
00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1317024 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1317024 ']' 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1317024 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317024 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1317024' 00:14:45.554 killing process with pid 1317024 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1317024 00:14:45.554 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1317024 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:45.812 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:48.347
00:14:48.347 real 0m8.703s
00:14:48.347 user 0m16.548s
00:14:48.347 sys 0m2.365s
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:48.347 ************************************
00:14:48.347 END TEST nvmf_nvme_cli
00:14:48.347 ************************************
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:48.347 ************************************
START TEST nvmf_vfio_user 00:14:48.347 ************************************ 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.347 * Looking for test storage... 00:14:48.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.347 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:48.347 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.347 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.348 --rc genhtml_branch_coverage=1 00:14:48.348 --rc genhtml_function_coverage=1 00:14:48.348 --rc genhtml_legend=1 00:14:48.348 --rc geninfo_all_blocks=1 00:14:48.348 --rc geninfo_unexecuted_blocks=1 00:14:48.348 00:14:48.348 ' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.348 --rc genhtml_branch_coverage=1 00:14:48.348 --rc genhtml_function_coverage=1 00:14:48.348 --rc genhtml_legend=1 00:14:48.348 --rc geninfo_all_blocks=1 00:14:48.348 --rc geninfo_unexecuted_blocks=1 00:14:48.348 00:14:48.348 ' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.348 --rc genhtml_branch_coverage=1 00:14:48.348 --rc genhtml_function_coverage=1 00:14:48.348 --rc genhtml_legend=1 00:14:48.348 --rc geninfo_all_blocks=1 00:14:48.348 --rc geninfo_unexecuted_blocks=1 00:14:48.348 00:14:48.348 ' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.348 --rc genhtml_branch_coverage=1 00:14:48.348 --rc genhtml_function_coverage=1 00:14:48.348 --rc genhtml_legend=1 00:14:48.348 --rc geninfo_all_blocks=1 00:14:48.348 --rc geninfo_unexecuted_blocks=1 00:14:48.348 00:14:48.348 ' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.348 
10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:48.348 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.348 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1317959 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1317959' 00:14:48.349 Process pid: 1317959 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1317959 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1317959 ']' 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:48.349 [2024-11-19 10:42:35.660251] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:14:48.349 [2024-11-19 10:42:35.660351] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.349 [2024-11-19 10:42:35.727308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.349 [2024-11-19 10:42:35.787151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.349 [2024-11-19 10:42:35.787206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.349 [2024-11-19 10:42:35.787219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.349 [2024-11-19 10:42:35.787229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.349 [2024-11-19 10:42:35.787238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.349 [2024-11-19 10:42:35.788839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.349 [2024-11-19 10:42:35.788897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.349 [2024-11-19 10:42:35.788977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.349 [2024-11-19 10:42:35.788973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:48.349 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:49.722 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:49.722 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:49.722 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:49.722 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.722 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:49.722 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.980 Malloc1 00:14:49.980 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:50.237 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:50.495 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:50.752 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.752 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:50.752 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:51.010 Malloc2 00:14:51.010 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:51.267 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:51.525 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.090 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:52.090 [2024-11-19 10:42:39.431075] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:14:52.090 [2024-11-19 10:42:39.431118] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318380 ] 00:14:52.090 [2024-11-19 10:42:39.478490] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:52.090 [2024-11-19 10:42:39.491811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.090 [2024-11-19 10:42:39.491845] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f70b8b62000 00:14:52.090 [2024-11-19 10:42:39.492806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.493807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.494809] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.495811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.496820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.497825] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.498830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.499831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.090 [2024-11-19 10:42:39.500851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.090 [2024-11-19 10:42:39.500871] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f70b8b57000 00:14:52.090 [2024-11-19 10:42:39.502124] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.090 [2024-11-19 10:42:39.517795] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:52.090 [2024-11-19 10:42:39.517838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:52.090 [2024-11-19 10:42:39.519974] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:52.090 [2024-11-19 10:42:39.520029] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:52.090 [2024-11-19 10:42:39.520123] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:52.090 [2024-11-19 10:42:39.520160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:52.090 [2024-11-19 10:42:39.520180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:52.090 [2024-11-19 10:42:39.520972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:52.090 [2024-11-19 10:42:39.520994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:52.090 [2024-11-19 10:42:39.521007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:52.090 [2024-11-19 10:42:39.521975] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.090 [2024-11-19 10:42:39.521995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:52.090 [2024-11-19 10:42:39.522009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.522977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:52.090 [2024-11-19 10:42:39.522996] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.523982] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:52.090 [2024-11-19 10:42:39.524001] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:52.090 [2024-11-19 10:42:39.524010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.524021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.524131] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:52.090 [2024-11-19 10:42:39.524139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.524148] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:52.090 [2024-11-19 10:42:39.525006] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:52.090 [2024-11-19 10:42:39.525992] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:52.090 [2024-11-19 10:42:39.526999] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:52.090 [2024-11-19 10:42:39.527997] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.090 [2024-11-19 10:42:39.528118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.090 [2024-11-19 10:42:39.529020] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:52.090 [2024-11-19 10:42:39.529039] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.090 [2024-11-19 10:42:39.529049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:52.090 [2024-11-19 10:42:39.529078] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:52.090 [2024-11-19 10:42:39.529092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.090 [2024-11-19 10:42:39.529122] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.090 [2024-11-19 10:42:39.529133] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.090 [2024-11-19 10:42:39.529140] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.090 [2024-11-19 10:42:39.529160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529244] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:52.091 [2024-11-19 10:42:39.529253] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:52.091 [2024-11-19 10:42:39.529260] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:52.091 [2024-11-19 10:42:39.529268] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:52.091 [2024-11-19 10:42:39.529361] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:52.091 [2024-11-19 10:42:39.529375] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:52.091 [2024-11-19 10:42:39.529383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.091 [2024-11-19 
10:42:39.529473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.091 [2024-11-19 10:42:39.529486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.091 [2024-11-19 10:42:39.529498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.091 [2024-11-19 10:42:39.529507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529566] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:52.091 [2024-11-19 10:42:39.529577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529751] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:52.091 [2024-11-19 10:42:39.529759] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:52.091 [2024-11-19 10:42:39.529765] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.091 [2024-11-19 10:42:39.529774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529817] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:52.091 [2024-11-19 10:42:39.529834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529860] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.091 [2024-11-19 10:42:39.529869] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.091 [2024-11-19 10:42:39.529874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.091 [2024-11-19 10:42:39.529883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.529911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.529935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.529962] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.091 [2024-11-19 10:42:39.529970] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.091 [2024-11-19 10:42:39.529976] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.091 [2024-11-19 10:42:39.529989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.530006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.530021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530083] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.091 [2024-11-19 10:42:39.530091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:52.091 [2024-11-19 10:42:39.530099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:52.091 [2024-11-19 10:42:39.530125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.530145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.530163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.530175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.530195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.530207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.530223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.091 [2024-11-19 10:42:39.530237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:52.091 [2024-11-19 10:42:39.530259] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:52.092 [2024-11-19 10:42:39.530269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:52.092 [2024-11-19 10:42:39.530276] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:52.092 [2024-11-19 10:42:39.530281] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:52.092 [2024-11-19 10:42:39.530312] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:52.092 [2024-11-19 10:42:39.530323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:52.092 [2024-11-19 10:42:39.530336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:52.092 [2024-11-19 10:42:39.530372] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:52.092 [2024-11-19 10:42:39.530380] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.092 [2024-11-19 10:42:39.530390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:52.092 [2024-11-19 10:42:39.530402] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:52.092 [2024-11-19 10:42:39.530411] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.092 [2024-11-19 10:42:39.530417] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.092 [2024-11-19 10:42:39.530426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.092 [2024-11-19 10:42:39.530439] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:52.092 [2024-11-19 10:42:39.530447] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:52.092 [2024-11-19 10:42:39.530453] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.092 [2024-11-19 10:42:39.530462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:52.092 [2024-11-19 10:42:39.530474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:52.092 [2024-11-19 10:42:39.530498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:52.092 [2024-11-19 10:42:39.530518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:52.092 [2024-11-19 10:42:39.530531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:52.092 ===================================================== 00:14:52.092 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.092 ===================================================== 00:14:52.092 Controller Capabilities/Features 00:14:52.092 ================================ 00:14:52.092 Vendor ID: 4e58 00:14:52.092 Subsystem Vendor ID: 4e58 00:14:52.092 Serial Number: SPDK1 00:14:52.092 Model Number: SPDK bdev Controller 00:14:52.092 Firmware Version: 25.01 00:14:52.092 Recommended Arb Burst: 6 00:14:52.092 IEEE OUI Identifier: 8d 6b 50 00:14:52.092 Multi-path I/O 00:14:52.092 May have multiple subsystem ports: Yes 00:14:52.092 May have multiple controllers: Yes 00:14:52.092 Associated with SR-IOV VF: No 00:14:52.092 Max Data Transfer Size: 131072 00:14:52.092 Max Number of Namespaces: 32 00:14:52.092 Max Number of I/O Queues: 127 00:14:52.092 NVMe Specification Version (VS): 1.3 00:14:52.092 NVMe Specification Version (Identify): 1.3 00:14:52.092 Maximum Queue Entries: 256 00:14:52.092 Contiguous Queues Required: Yes 00:14:52.092 Arbitration Mechanisms Supported 00:14:52.092 Weighted Round Robin: Not Supported 00:14:52.092 Vendor Specific: Not Supported 00:14:52.092 Reset Timeout: 15000 ms 00:14:52.092 Doorbell Stride: 4 bytes 00:14:52.092 NVM Subsystem Reset: Not Supported 00:14:52.092 Command Sets Supported 00:14:52.092 NVM Command Set: Supported 00:14:52.092 Boot Partition: Not Supported 00:14:52.092 Memory 
Page Size Minimum: 4096 bytes 00:14:52.092 Memory Page Size Maximum: 4096 bytes 00:14:52.092 Persistent Memory Region: Not Supported 00:14:52.092 Optional Asynchronous Events Supported 00:14:52.092 Namespace Attribute Notices: Supported 00:14:52.092 Firmware Activation Notices: Not Supported 00:14:52.092 ANA Change Notices: Not Supported 00:14:52.092 PLE Aggregate Log Change Notices: Not Supported 00:14:52.092 LBA Status Info Alert Notices: Not Supported 00:14:52.092 EGE Aggregate Log Change Notices: Not Supported 00:14:52.092 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.092 Zone Descriptor Change Notices: Not Supported 00:14:52.092 Discovery Log Change Notices: Not Supported 00:14:52.092 Controller Attributes 00:14:52.092 128-bit Host Identifier: Supported 00:14:52.092 Non-Operational Permissive Mode: Not Supported 00:14:52.092 NVM Sets: Not Supported 00:14:52.092 Read Recovery Levels: Not Supported 00:14:52.092 Endurance Groups: Not Supported 00:14:52.092 Predictable Latency Mode: Not Supported 00:14:52.092 Traffic Based Keep ALive: Not Supported 00:14:52.092 Namespace Granularity: Not Supported 00:14:52.092 SQ Associations: Not Supported 00:14:52.092 UUID List: Not Supported 00:14:52.092 Multi-Domain Subsystem: Not Supported 00:14:52.092 Fixed Capacity Management: Not Supported 00:14:52.092 Variable Capacity Management: Not Supported 00:14:52.092 Delete Endurance Group: Not Supported 00:14:52.092 Delete NVM Set: Not Supported 00:14:52.092 Extended LBA Formats Supported: Not Supported 00:14:52.092 Flexible Data Placement Supported: Not Supported 00:14:52.092 00:14:52.092 Controller Memory Buffer Support 00:14:52.092 ================================ 00:14:52.092 Supported: No 00:14:52.092 00:14:52.092 Persistent Memory Region Support 00:14:52.092 ================================ 00:14:52.092 Supported: No 00:14:52.092 00:14:52.092 Admin Command Set Attributes 00:14:52.092 ============================ 00:14:52.092 Security Send/Receive: Not Supported 
00:14:52.092 Format NVM: Not Supported 00:14:52.092 Firmware Activate/Download: Not Supported 00:14:52.092 Namespace Management: Not Supported 00:14:52.092 Device Self-Test: Not Supported 00:14:52.092 Directives: Not Supported 00:14:52.092 NVMe-MI: Not Supported 00:14:52.092 Virtualization Management: Not Supported 00:14:52.092 Doorbell Buffer Config: Not Supported 00:14:52.092 Get LBA Status Capability: Not Supported 00:14:52.092 Command & Feature Lockdown Capability: Not Supported 00:14:52.092 Abort Command Limit: 4 00:14:52.092 Async Event Request Limit: 4 00:14:52.092 Number of Firmware Slots: N/A 00:14:52.092 Firmware Slot 1 Read-Only: N/A 00:14:52.092 Firmware Activation Without Reset: N/A 00:14:52.092 Multiple Update Detection Support: N/A 00:14:52.092 Firmware Update Granularity: No Information Provided 00:14:52.092 Per-Namespace SMART Log: No 00:14:52.092 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.092 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:52.092 Command Effects Log Page: Supported 00:14:52.092 Get Log Page Extended Data: Supported 00:14:52.092 Telemetry Log Pages: Not Supported 00:14:52.092 Persistent Event Log Pages: Not Supported 00:14:52.092 Supported Log Pages Log Page: May Support 00:14:52.092 Commands Supported & Effects Log Page: Not Supported 00:14:52.092 Feature Identifiers & Effects Log Page:May Support 00:14:52.092 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.092 Data Area 4 for Telemetry Log: Not Supported 00:14:52.092 Error Log Page Entries Supported: 128 00:14:52.092 Keep Alive: Supported 00:14:52.093 Keep Alive Granularity: 10000 ms 00:14:52.093 00:14:52.093 NVM Command Set Attributes 00:14:52.093 ========================== 00:14:52.093 Submission Queue Entry Size 00:14:52.093 Max: 64 00:14:52.093 Min: 64 00:14:52.093 Completion Queue Entry Size 00:14:52.093 Max: 16 00:14:52.093 Min: 16 00:14:52.093 Number of Namespaces: 32 00:14:52.093 Compare Command: Supported 00:14:52.093 Write Uncorrectable 
Command: Not Supported 00:14:52.093 Dataset Management Command: Supported 00:14:52.093 Write Zeroes Command: Supported 00:14:52.093 Set Features Save Field: Not Supported 00:14:52.093 Reservations: Not Supported 00:14:52.093 Timestamp: Not Supported 00:14:52.093 Copy: Supported 00:14:52.093 Volatile Write Cache: Present 00:14:52.093 Atomic Write Unit (Normal): 1 00:14:52.093 Atomic Write Unit (PFail): 1 00:14:52.093 Atomic Compare & Write Unit: 1 00:14:52.093 Fused Compare & Write: Supported 00:14:52.093 Scatter-Gather List 00:14:52.093 SGL Command Set: Supported (Dword aligned) 00:14:52.093 SGL Keyed: Not Supported 00:14:52.093 SGL Bit Bucket Descriptor: Not Supported 00:14:52.093 SGL Metadata Pointer: Not Supported 00:14:52.093 Oversized SGL: Not Supported 00:14:52.093 SGL Metadata Address: Not Supported 00:14:52.093 SGL Offset: Not Supported 00:14:52.093 Transport SGL Data Block: Not Supported 00:14:52.093 Replay Protected Memory Block: Not Supported 00:14:52.093 00:14:52.093 Firmware Slot Information 00:14:52.093 ========================= 00:14:52.093 Active slot: 1 00:14:52.093 Slot 1 Firmware Revision: 25.01 00:14:52.093 00:14:52.093 00:14:52.093 Commands Supported and Effects 00:14:52.093 ============================== 00:14:52.093 Admin Commands 00:14:52.093 -------------- 00:14:52.093 Get Log Page (02h): Supported 00:14:52.093 Identify (06h): Supported 00:14:52.093 Abort (08h): Supported 00:14:52.093 Set Features (09h): Supported 00:14:52.093 Get Features (0Ah): Supported 00:14:52.093 Asynchronous Event Request (0Ch): Supported 00:14:52.093 Keep Alive (18h): Supported 00:14:52.093 I/O Commands 00:14:52.093 ------------ 00:14:52.093 Flush (00h): Supported LBA-Change 00:14:52.093 Write (01h): Supported LBA-Change 00:14:52.093 Read (02h): Supported 00:14:52.093 Compare (05h): Supported 00:14:52.093 Write Zeroes (08h): Supported LBA-Change 00:14:52.093 Dataset Management (09h): Supported LBA-Change 00:14:52.093 Copy (19h): Supported LBA-Change 00:14:52.093 
00:14:52.093 Error Log 00:14:52.093 ========= 00:14:52.093 00:14:52.093 Arbitration 00:14:52.093 =========== 00:14:52.093 Arbitration Burst: 1 00:14:52.093 00:14:52.093 Power Management 00:14:52.093 ================ 00:14:52.093 Number of Power States: 1 00:14:52.093 Current Power State: Power State #0 00:14:52.093 Power State #0: 00:14:52.093 Max Power: 0.00 W 00:14:52.093 Non-Operational State: Operational 00:14:52.093 Entry Latency: Not Reported 00:14:52.093 Exit Latency: Not Reported 00:14:52.093 Relative Read Throughput: 0 00:14:52.093 Relative Read Latency: 0 00:14:52.093 Relative Write Throughput: 0 00:14:52.093 Relative Write Latency: 0 00:14:52.093 Idle Power: Not Reported 00:14:52.093 Active Power: Not Reported 00:14:52.093 Non-Operational Permissive Mode: Not Supported 00:14:52.093 00:14:52.093 Health Information 00:14:52.093 ================== 00:14:52.093 Critical Warnings: 00:14:52.093 Available Spare Space: OK 00:14:52.093 Temperature: OK 00:14:52.093 Device Reliability: OK 00:14:52.093 Read Only: No 00:14:52.093 Volatile Memory Backup: OK 00:14:52.093 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.093 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.093 Available Spare: 0% 00:14:52.093 Available Spare Threshold: 0% 00:14:52.093 Life Percentage Used: 0% 
[2024-11-19 10:42:39.530684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:52.093 [2024-11-19 10:42:39.530701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:52.093 [2024-11-19 10:42:39.530743] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:52.093 [2024-11-19 10:42:39.530761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.093 [2024-11-19 10:42:39.530772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.093 [2024-11-19 10:42:39.530782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.093 [2024-11-19 10:42:39.530791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.093 [2024-11-19 10:42:39.533318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.093 [2024-11-19 10:42:39.533342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:52.093 [2024-11-19 10:42:39.534029] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.093 [2024-11-19 10:42:39.534123] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:52.093 [2024-11-19 10:42:39.534136] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:52.093 [2024-11-19 10:42:39.535036] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:52.093 [2024-11-19 10:42:39.535063] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:52.093 [2024-11-19 10:42:39.535120] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:52.093 [2024-11-19 10:42:39.538314] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:14:52.093 Data Units Read: 0 00:14:52.093 Data Units Written: 0 00:14:52.093 Host Read Commands: 0 00:14:52.093 Host Write Commands: 0 00:14:52.093 Controller Busy Time: 0 minutes 00:14:52.093 Power Cycles: 0 00:14:52.093 Power On Hours: 0 hours 00:14:52.093 Unsafe Shutdowns: 0 00:14:52.093 Unrecoverable Media Errors: 0 00:14:52.093 Lifetime Error Log Entries: 0 00:14:52.093 Warning Temperature Time: 0 minutes 00:14:52.093 Critical Temperature Time: 0 minutes 00:14:52.093 00:14:52.093 Number of Queues 00:14:52.093 ================ 00:14:52.093 Number of I/O Submission Queues: 127 00:14:52.093 Number of I/O Completion Queues: 127 00:14:52.093 00:14:52.093 Active Namespaces 00:14:52.093 ================= 00:14:52.093 Namespace ID:1 00:14:52.093 Error Recovery Timeout: Unlimited 00:14:52.093 Command Set Identifier: NVM (00h) 00:14:52.093 Deallocate: Supported 00:14:52.093 Deallocated/Unwritten Error: Not Supported 00:14:52.093 Deallocated Read Value: Unknown 00:14:52.093 Deallocate in Write Zeroes: Not Supported 00:14:52.093 Deallocated Guard Field: 0xFFFF 00:14:52.093 Flush: Supported 00:14:52.093 Reservation: Supported 00:14:52.093 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.093 Size (in LBAs): 131072 (0GiB) 00:14:52.093 Capacity (in LBAs): 131072 (0GiB) 00:14:52.094 Utilization (in LBAs): 131072 (0GiB) 00:14:52.094 NGUID: D1FF4F8E84F74B039744EACC006CC5C2 00:14:52.094 UUID: d1ff4f8e-84f7-4b03-9744-eacc006cc5c2 00:14:52.094 Thin Provisioning: Not Supported 00:14:52.094 Per-NS Atomic Units: Yes 00:14:52.094 Atomic Boundary Size (Normal): 0 00:14:52.094 Atomic Boundary Size (PFail): 0 00:14:52.094 Atomic Boundary Offset: 0 00:14:52.094 Maximum Single Source Range Length: 65535 00:14:52.094 Maximum Copy Length: 65535 00:14:52.094 Maximum Source Range Count: 1 00:14:52.094 NGUID/EUI64 Never Reused: No 00:14:52.094 Namespace Write Protected: No 00:14:52.094 Number of LBA Formats: 1 00:14:52.094 Current LBA Format: LBA Format #00 00:14:52.094 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:52.094 00:14:52.094 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:52.352 [2024-11-19 10:42:39.791208] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.662 Initializing NVMe Controllers 00:14:57.662 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.662 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.662 Initialization complete. Launching workers. 00:14:57.662 ======================================================== 00:14:57.662 Latency(us) 00:14:57.662 Device Information : IOPS MiB/s Average min max 00:14:57.662 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32909.79 128.55 3889.34 1178.45 8320.02 00:14:57.662 ======================================================== 00:14:57.662 Total : 32909.79 128.55 3889.34 1178.45 8320.02 00:14:57.662 00:14:57.662 [2024-11-19 10:42:44.814088] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.662 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:57.662 [2024-11-19 10:42:45.081355] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.984 Initializing NVMe Controllers 00:15:02.984 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.984 Initialization complete. Launching workers. 00:15:02.984 ======================================================== 00:15:02.984 Latency(us) 00:15:02.984 Device Information : IOPS MiB/s Average min max 00:15:02.984 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16049.60 62.69 7982.05 6002.87 14268.91 00:15:02.984 ======================================================== 00:15:02.984 Total : 16049.60 62.69 7982.05 6002.87 14268.91 00:15:02.984 00:15:02.984 [2024-11-19 10:42:50.115951] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.984 10:42:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:02.984 [2024-11-19 10:42:50.364220] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.245 [2024-11-19 10:42:55.424612] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.245 Initializing NVMe Controllers 00:15:08.245 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.245 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:08.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:08.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:08.245 Initialization complete. 
Launching workers. 00:15:08.245 Starting thread on core 2 00:15:08.245 Starting thread on core 3 00:15:08.245 Starting thread on core 1 00:15:08.245 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:08.245 [2024-11-19 10:42:55.753815] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.527 [2024-11-19 10:42:58.923588] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.527 Initializing NVMe Controllers 00:15:11.527 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.527 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:11.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:11.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:11.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:11.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:11.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:11.527 Initialization complete. Launching workers. 
00:15:11.527 Starting thread on core 1 with urgent priority queue 00:15:11.527 Starting thread on core 2 with urgent priority queue 00:15:11.527 Starting thread on core 3 with urgent priority queue 00:15:11.527 Starting thread on core 0 with urgent priority queue 00:15:11.527 SPDK bdev Controller (SPDK1 ) core 0: 2879.00 IO/s 34.73 secs/100000 ios 00:15:11.527 SPDK bdev Controller (SPDK1 ) core 1: 3092.00 IO/s 32.34 secs/100000 ios 00:15:11.527 SPDK bdev Controller (SPDK1 ) core 2: 2627.33 IO/s 38.06 secs/100000 ios 00:15:11.527 SPDK bdev Controller (SPDK1 ) core 3: 2735.33 IO/s 36.56 secs/100000 ios 00:15:11.527 ======================================================== 00:15:11.527 00:15:11.527 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.783 [2024-11-19 10:42:59.247818] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.783 Initializing NVMe Controllers 00:15:11.783 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.784 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.784 Namespace ID: 1 size: 0GB 00:15:11.784 Initialization complete. 00:15:11.784 INFO: using host memory buffer for IO 00:15:11.784 Hello world! 
00:15:11.784 [2024-11-19 10:42:59.282536] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.784 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:12.041 [2024-11-19 10:42:59.599961] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.415 Initializing NVMe Controllers 00:15:13.415 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.415 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.415 Initialization complete. Launching workers. 00:15:13.415 submit (in ns) avg, min, max = 8154.5, 3572.2, 4016028.9 00:15:13.415 complete (in ns) avg, min, max = 26993.5, 2076.7, 4019794.4 00:15:13.415 00:15:13.415 Submit histogram 00:15:13.415 ================ 00:15:13.415 Range in us Cumulative Count 00:15:13.415 3.556 - 3.579: 0.1031% ( 13) 00:15:13.415 3.579 - 3.603: 2.7833% ( 338) 00:15:13.415 3.603 - 3.627: 8.1595% ( 678) 00:15:13.415 3.627 - 3.650: 18.5869% ( 1315) 00:15:13.415 3.650 - 3.674: 27.2381% ( 1091) 00:15:13.415 3.674 - 3.698: 34.4858% ( 914) 00:15:13.415 3.698 - 3.721: 39.7352% ( 662) 00:15:13.415 3.721 - 3.745: 43.8189% ( 515) 00:15:13.415 3.745 - 3.769: 48.0929% ( 539) 00:15:13.415 3.769 - 3.793: 52.1687% ( 514) 00:15:13.415 3.793 - 3.816: 55.6419% ( 438) 00:15:13.415 3.816 - 3.840: 59.5829% ( 497) 00:15:13.415 3.840 - 3.864: 65.3239% ( 724) 00:15:13.415 3.864 - 3.887: 71.3663% ( 762) 00:15:13.415 3.887 - 3.911: 76.5601% ( 655) 00:15:13.415 3.911 - 3.935: 80.7152% ( 524) 00:15:13.415 3.935 - 3.959: 83.0624% ( 296) 00:15:13.415 3.959 - 3.982: 84.8941% ( 231) 00:15:13.415 3.982 - 4.006: 86.7814% ( 238) 00:15:13.415 4.006 - 4.030: 88.2642% ( 187) 00:15:13.415 4.030 - 4.053: 89.7312% ( 
185) 00:15:13.415 4.053 - 4.077: 90.9286% ( 151) 00:15:13.415 4.077 - 4.101: 92.4748% ( 195) 00:15:13.415 4.101 - 4.124: 93.5770% ( 139) 00:15:13.415 4.124 - 4.148: 94.3779% ( 101) 00:15:13.415 4.148 - 4.172: 94.9092% ( 67) 00:15:13.415 4.172 - 4.196: 95.3295% ( 53) 00:15:13.415 4.196 - 4.219: 95.5436% ( 27) 00:15:13.415 4.219 - 4.243: 95.7418% ( 25) 00:15:13.415 4.243 - 4.267: 95.8845% ( 18) 00:15:13.415 4.267 - 4.290: 95.9797% ( 12) 00:15:13.415 4.290 - 4.314: 96.2255% ( 31) 00:15:13.415 4.314 - 4.338: 96.3207% ( 12) 00:15:13.415 4.338 - 4.361: 96.4475% ( 16) 00:15:13.415 4.361 - 4.385: 96.5110% ( 8) 00:15:13.415 4.385 - 4.409: 96.5427% ( 4) 00:15:13.415 4.409 - 4.433: 96.5665% ( 3) 00:15:13.415 4.433 - 4.456: 96.6141% ( 6) 00:15:13.415 4.456 - 4.480: 96.6537% ( 5) 00:15:13.415 4.480 - 4.504: 96.6696% ( 2) 00:15:13.415 4.504 - 4.527: 96.6934% ( 3) 00:15:13.415 4.527 - 4.551: 96.7172% ( 3) 00:15:13.415 4.551 - 4.575: 96.7251% ( 1) 00:15:13.415 4.575 - 4.599: 96.7647% ( 5) 00:15:13.415 4.599 - 4.622: 96.8282% ( 8) 00:15:13.415 4.622 - 4.646: 96.8599% ( 4) 00:15:13.415 4.646 - 4.670: 96.8757% ( 2) 00:15:13.415 4.670 - 4.693: 96.9075% ( 4) 00:15:13.415 4.693 - 4.717: 96.9313% ( 3) 00:15:13.415 4.717 - 4.741: 96.9392% ( 1) 00:15:13.415 4.741 - 4.764: 97.0185% ( 10) 00:15:13.415 4.764 - 4.788: 97.0819% ( 8) 00:15:13.415 4.788 - 4.812: 97.1216% ( 5) 00:15:13.415 4.812 - 4.836: 97.1612% ( 5) 00:15:13.415 4.836 - 4.859: 97.2167% ( 7) 00:15:13.415 4.859 - 4.883: 97.2326% ( 2) 00:15:13.415 4.883 - 4.907: 97.3039% ( 9) 00:15:13.415 4.907 - 4.930: 97.3357% ( 4) 00:15:13.415 4.930 - 4.954: 97.3674% ( 4) 00:15:13.415 4.954 - 4.978: 97.3912% ( 3) 00:15:13.415 4.978 - 5.001: 97.4467% ( 7) 00:15:13.415 5.001 - 5.025: 97.4705% ( 3) 00:15:13.415 5.025 - 5.049: 97.5180% ( 6) 00:15:13.415 5.049 - 5.073: 97.5894% ( 9) 00:15:13.415 5.073 - 5.096: 97.6291% ( 5) 00:15:13.415 5.096 - 5.120: 97.6528% ( 3) 00:15:13.415 5.120 - 5.144: 97.6925% ( 5) 00:15:13.415 5.144 - 5.167: 97.7083% ( 2) 
00:15:13.415 5.167 - 5.191: 97.7163% ( 1) 00:15:13.415 5.191 - 5.215: 97.7242% ( 1) 00:15:13.415 5.239 - 5.262: 97.7401% ( 2) 00:15:13.415 5.262 - 5.286: 97.7480% ( 1) 00:15:13.415 5.286 - 5.310: 97.7639% ( 2) 00:15:13.415 5.333 - 5.357: 97.7797% ( 2) 00:15:13.415 5.357 - 5.381: 97.7956% ( 2) 00:15:13.415 5.404 - 5.428: 97.8194% ( 3) 00:15:13.415 5.428 - 5.452: 97.8273% ( 1) 00:15:13.415 5.452 - 5.476: 97.8352% ( 1) 00:15:13.415 5.476 - 5.499: 97.8511% ( 2) 00:15:13.415 5.499 - 5.523: 97.8669% ( 2) 00:15:13.415 5.618 - 5.641: 97.8907% ( 3) 00:15:13.415 5.713 - 5.736: 97.8987% ( 1) 00:15:13.415 5.736 - 5.760: 97.9066% ( 1) 00:15:13.415 5.760 - 5.784: 97.9145% ( 1) 00:15:13.415 5.784 - 5.807: 97.9224% ( 1) 00:15:13.415 5.831 - 5.855: 97.9304% ( 1) 00:15:13.415 5.902 - 5.926: 97.9383% ( 1) 00:15:13.415 5.950 - 5.973: 97.9462% ( 1) 00:15:13.415 6.068 - 6.116: 97.9700% ( 3) 00:15:13.415 6.163 - 6.210: 97.9859% ( 2) 00:15:13.415 6.210 - 6.258: 97.9938% ( 1) 00:15:13.415 6.637 - 6.684: 98.0017% ( 1) 00:15:13.415 6.732 - 6.779: 98.0255% ( 3) 00:15:13.415 6.827 - 6.874: 98.0335% ( 1) 00:15:13.415 7.016 - 7.064: 98.0414% ( 1) 00:15:13.415 7.111 - 7.159: 98.0493% ( 1) 00:15:13.415 7.348 - 7.396: 98.0652% ( 2) 00:15:13.415 7.538 - 7.585: 98.0890% ( 3) 00:15:13.415 7.585 - 7.633: 98.0969% ( 1) 00:15:13.415 7.633 - 7.680: 98.1207% ( 3) 00:15:13.415 7.680 - 7.727: 98.1286% ( 1) 00:15:13.415 7.727 - 7.775: 98.1365% ( 1) 00:15:13.415 7.775 - 7.822: 98.1445% ( 1) 00:15:13.415 7.822 - 7.870: 98.1524% ( 1) 00:15:13.415 7.917 - 7.964: 98.1762% ( 3) 00:15:13.415 7.964 - 8.012: 98.1921% ( 2) 00:15:13.415 8.012 - 8.059: 98.2000% ( 1) 00:15:13.415 8.154 - 8.201: 98.2079% ( 1) 00:15:13.415 8.296 - 8.344: 98.2158% ( 1) 00:15:13.415 8.344 - 8.391: 98.2238% ( 1) 00:15:13.415 8.391 - 8.439: 98.2396% ( 2) 00:15:13.416 8.486 - 8.533: 98.2476% ( 1) 00:15:13.416 8.628 - 8.676: 98.2634% ( 2) 00:15:13.416 8.676 - 8.723: 98.2793% ( 2) 00:15:13.416 8.723 - 8.770: 98.2872% ( 1) 00:15:13.416 8.818 - 
8.865: 98.2951% ( 1) 00:15:13.416 8.913 - 8.960: 98.3031% ( 1) 00:15:13.416 9.150 - 9.197: 98.3110% ( 1) 00:15:13.416 9.197 - 9.244: 98.3269% ( 2) 00:15:13.416 9.292 - 9.339: 98.3427% ( 2) 00:15:13.416 9.339 - 9.387: 98.3586% ( 2) 00:15:13.416 9.434 - 9.481: 98.3665% ( 1) 00:15:13.416 9.529 - 9.576: 98.3744% ( 1) 00:15:13.416 9.813 - 9.861: 98.3824% ( 1) 00:15:13.416 9.861 - 9.908: 98.3903% ( 1) 00:15:13.416 9.908 - 9.956: 98.3982% ( 1) 00:15:13.416 10.003 - 10.050: 98.4220% ( 3) 00:15:13.416 10.287 - 10.335: 98.4458% ( 3) 00:15:13.416 10.477 - 10.524: 98.4617% ( 2) 00:15:13.416 10.761 - 10.809: 98.4696% ( 1) 00:15:13.416 10.809 - 10.856: 98.4775% ( 1) 00:15:13.416 10.999 - 11.046: 98.4854% ( 1) 00:15:13.416 11.662 - 11.710: 98.5013% ( 2) 00:15:13.416 11.710 - 11.757: 98.5172% ( 2) 00:15:13.416 11.947 - 11.994: 98.5330% ( 2) 00:15:13.416 11.994 - 12.041: 98.5489% ( 2) 00:15:13.416 12.231 - 12.326: 98.5568% ( 1) 00:15:13.416 12.516 - 12.610: 98.5647% ( 1) 00:15:13.416 12.610 - 12.705: 98.5806% ( 2) 00:15:13.416 12.705 - 12.800: 98.5885% ( 1) 00:15:13.416 13.179 - 13.274: 98.6044% ( 2) 00:15:13.416 13.464 - 13.559: 98.6203% ( 2) 00:15:13.416 13.653 - 13.748: 98.6282% ( 1) 00:15:13.416 13.843 - 13.938: 98.6440% ( 2) 00:15:13.416 14.127 - 14.222: 98.6520% ( 1) 00:15:13.416 14.886 - 14.981: 98.6837% ( 4) 00:15:13.416 15.076 - 15.170: 98.6916% ( 1) 00:15:13.416 15.265 - 15.360: 98.6995% ( 1) 00:15:13.416 16.403 - 16.498: 98.7075% ( 1) 00:15:13.416 16.972 - 17.067: 98.7154% ( 1) 00:15:13.416 17.067 - 17.161: 98.7233% ( 1) 00:15:13.416 17.351 - 17.446: 98.7392% ( 2) 00:15:13.416 17.446 - 17.541: 98.7868% ( 6) 00:15:13.416 17.541 - 17.636: 98.8423% ( 7) 00:15:13.416 17.636 - 17.730: 98.9295% ( 11) 00:15:13.416 17.730 - 17.825: 98.9929% ( 8) 00:15:13.416 17.825 - 17.920: 99.0564% ( 8) 00:15:13.416 17.920 - 18.015: 99.1357% ( 10) 00:15:13.416 18.015 - 18.110: 99.2150% ( 10) 00:15:13.416 18.110 - 18.204: 99.3181% ( 13) 00:15:13.416 18.204 - 18.299: 99.3815% ( 8) 00:15:13.416 
18.299 - 18.394: 99.4529% ( 9) 00:15:13.416 18.394 - 18.489: 99.5797% ( 16) 00:15:13.416 18.489 - 18.584: 99.6194% ( 5) 00:15:13.416 18.584 - 18.679: 99.6590% ( 5) 00:15:13.416 18.679 - 18.773: 99.6907% ( 4) 00:15:13.416 18.773 - 18.868: 99.7463% ( 7) 00:15:13.416 18.868 - 18.963: 99.7780% ( 4) 00:15:13.416 18.963 - 19.058: 99.7938% ( 2) 00:15:13.416 19.153 - 19.247: 99.8018% ( 1) 00:15:13.416 19.247 - 19.342: 99.8097% ( 1) 00:15:13.416 19.342 - 19.437: 99.8176% ( 1) 00:15:13.416 19.437 - 19.532: 99.8335% ( 2) 00:15:13.416 19.627 - 19.721: 99.8414% ( 1) 00:15:13.416 20.006 - 20.101: 99.8493% ( 1) 00:15:13.416 22.281 - 22.376: 99.8573% ( 1) 00:15:13.416 22.566 - 22.661: 99.8652% ( 1) 00:15:13.416 22.850 - 22.945: 99.8731% ( 1) 00:15:13.416 25.600 - 25.790: 99.8811% ( 1) 00:15:13.416 28.255 - 28.444: 99.8890% ( 1) 00:15:13.416 28.634 - 28.824: 99.8969% ( 1) 00:15:13.416 3980.705 - 4004.978: 99.9841% ( 11) 00:15:13.416 4004.978 - 4029.250: 100.0000% ( 2) 00:15:13.416 00:15:13.416 Complete histogram 00:15:13.416 ================== 00:15:13.416 Range in us Cumulative Count 00:15:13.416 2.074 - 2.086: 4.6705% ( 589) 00:15:13.416 2.086 - 2.098: 33.9307% ( 3690) 00:15:13.416 2.098 - 2.110: 38.1889% ( 537) 00:15:13.416 2.110 - 2.121: 42.8118% ( 583) 00:15:13.416 2.121 - 2.133: 47.9740% ( 651) 00:15:13.416 2.133 - 2.145: 49.0286% ( 133) 00:15:13.416 2.145 - 2.157: 55.2216% ( 781) 00:15:13.416 2.157 - 2.169: 64.6499% ( 1189) 00:15:13.416 2.169 - 2.181: 65.8235% ( 148) 00:15:13.416 2.181 - 2.193: 67.9565% ( 269) 00:15:13.416 2.193 - 2.204: 69.8834% ( 243) 00:15:13.416 2.204 - 2.216: 70.3037% ( 53) 00:15:13.416 2.216 - 2.228: 75.0218% ( 595) 00:15:13.416 2.228 - 2.240: 83.1576% ( 1026) 00:15:13.416 2.240 - 2.252: 86.0677% ( 367) 00:15:13.416 2.252 - 2.264: 88.6290% ( 323) 00:15:13.416 2.264 - 2.276: 90.6589% ( 256) 00:15:13.416 2.276 - 2.287: 91.3488% ( 87) 00:15:13.416 2.287 - 2.299: 91.8880% ( 68) 00:15:13.416 2.299 - 2.311: 92.4510% ( 71) 00:15:13.416 2.311 - 2.323: 93.6643% 
( 153) 00:15:13.416 2.323 - 2.335: 94.3145% ( 82) 00:15:13.416 2.335 - 2.347: 94.4731% ( 20) 00:15:13.416 2.347 - 2.359: 94.5207% ( 6) 00:15:13.416 2.359 - 2.370: 94.6317% ( 14) 00:15:13.416 2.370 - 2.382: 94.7823% ( 19) 00:15:13.416 2.382 - 2.394: 95.2898% ( 64) 00:15:13.416 2.394 - 2.406: 96.0352% ( 94) 00:15:13.416 2.406 - 2.418: 96.3524% ( 40) 00:15:13.416 2.418 - 2.430: 96.6299% ( 35) 00:15:13.416 2.430 - 2.441: 96.8282% ( 25) 00:15:13.416 2.441 - 2.453: 96.9788% ( 19) 00:15:13.416 2.453 - 2.465: 97.1533% ( 22) 00:15:13.416 2.465 - 2.477: 97.2643% ( 14) 00:15:13.416 2.477 - 2.489: 97.3753% ( 14) 00:15:13.416 2.489 - 2.501: 97.4546% ( 10) 00:15:13.416 2.501 - 2.513: 97.5498% ( 12) 00:15:13.416 2.513 - 2.524: 97.5735% ( 3) 00:15:13.416 2.524 - 2.536: 97.6528% ( 10) 00:15:13.416 2.536 - 2.548: 97.6925% ( 5) 00:15:13.416 2.548 - 2.560: 97.7083% ( 2) 00:15:13.416 2.560 - 2.572: 97.7321% ( 3) 00:15:13.416 2.572 - 2.584: 97.7639% ( 4) 00:15:13.416 2.584 - 2.596: 97.7876% ( 3) 00:15:13.416 2.596 - 2.607: 97.8035% ( 2) 00:15:13.416 2.607 - 2.619: 97.8273% ( 3) 00:15:13.416 2.631 - 2.643: 97.8432% ( 2) 00:15:13.416 2.643 - 2.655: 97.8590% ( 2) 00:15:13.416 2.655 - 2.667: 97.8828% ( 3) 00:15:13.416 2.667 - 2.679: 97.9224% ( 5) 00:15:13.416 2.679 - 2.690: 97.9383% ( 2) 00:15:13.416 2.690 - 2.702: 97.9462% ( 1) 00:15:13.416 2.702 - 2.714: 97.9542% ( 1) 00:15:13.416 2.714 - 2.726: 97.9621% ( 1) 00:15:13.416 2.726 - 2.738: 97.9780% ( 2) 00:15:13.416 2.738 - 2.750: 98.0017% ( 3) 00:15:13.416 2.750 - 2.761: 98.0097% ( 1) 00:15:13.416 2.761 - 2.773: 98.0176% ( 1) 00:15:13.416 2.773 - 2.785: 98.0255% ( 1) 00:15:13.416 2.785 - 2.797: 98.0414% ( 2) 00:15:13.416 2.797 - 2.809: 98.0573% ( 2) 00:15:13.416 2.809 - 2.821: 98.0652% ( 1) 00:15:13.416 2.833 - 2.844: 98.0969% ( 4) 00:15:13.416 2.844 - 2.856: 98.1048% ( 1) 00:15:13.416 2.880 - 2.892: 98.1128% ( 1) 00:15:13.416 2.927 - 2.939: 98.1286% ( 2) 00:15:13.416 2.939 - 2.951: 98.1365% ( 1) 00:15:13.416 2.951 - 2.963: 98.1445% ( 1) 
[2024-11-19 10:43:00.620053] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 
00:15:13.416 2.963 - 2.975: 98.1524% ( 1) 00:15:13.416 2.975 - 2.987: 98.1603% ( 1) 00:15:13.416 2.999 - 3.010: 98.1841% ( 3) 00:15:13.416 3.010 - 3.022: 98.1921% ( 1) 00:15:13.416 3.058 - 3.081: 98.2000% ( 1) 00:15:13.416 3.081 - 3.105: 98.2079% ( 1) 00:15:13.416 3.105 - 3.129: 98.2238% ( 2) 00:15:13.416 3.200 - 3.224: 98.2396% ( 2) 00:15:13.416 3.271 - 3.295: 98.2476% ( 1) 00:15:13.416 3.295 - 3.319: 98.2555% ( 1) 00:15:13.416 3.366 - 3.390: 98.2714% ( 2) 00:15:13.416 3.390 - 3.413: 98.2793% ( 1) 00:15:13.416 3.413 - 3.437: 98.3031% ( 3) 00:15:13.417 3.484 - 3.508: 98.3110% ( 1) 00:15:13.417 3.508 - 3.532: 98.3269% ( 2) 00:15:13.417 3.556 - 3.579: 98.3506% ( 3) 00:15:13.417 3.579 - 3.603: 98.3665% ( 2) 00:15:13.417 3.674 - 3.698: 98.3982% ( 4) 00:15:13.417 3.698 - 3.721: 98.4141% ( 2) 00:15:13.417 3.745 - 3.769: 98.4299% ( 2) 00:15:13.417 3.769 - 3.793: 98.4379% ( 1) 00:15:13.417 3.816 - 3.840: 98.4458% ( 1) 00:15:13.417 3.887 - 3.911: 98.4617% ( 2) 00:15:13.417 3.935 - 3.959: 98.4775% ( 2) 00:15:13.417 3.959 - 3.982: 98.5013% ( 3) 00:15:13.417 3.982 - 4.006: 98.5172% ( 2) 00:15:13.417 4.053 - 4.077: 98.5251% ( 1) 00:15:13.417 4.148 - 4.172: 98.5330% ( 1) 00:15:13.417 4.172 - 4.196: 98.5410% ( 1) 00:15:13.417 4.196 - 4.219: 98.5489% ( 1) 00:15:13.417 4.219 - 4.243: 98.5568% ( 1) 00:15:13.417 4.314 - 4.338: 98.5647% ( 1) 00:15:13.417 4.480 - 4.504: 98.5727% ( 1) 00:15:13.417 5.428 - 5.452: 98.5806% ( 1) 00:15:13.417 5.547 - 5.570: 98.5965% ( 2) 00:15:13.417 5.665 - 5.689: 98.6044% ( 1) 00:15:13.417 5.997 - 6.021: 98.6123% ( 1) 00:15:13.417 6.116 - 6.163: 98.6361% ( 3) 00:15:13.417 6.353 - 6.400: 98.6440% ( 1) 00:15:13.417 6.400 - 6.447: 98.6520% ( 1) 00:15:13.417 6.827 - 6.874: 98.6599% ( 1) 00:15:13.417 6.874 - 6.921: 98.6678% ( 1) 00:15:13.417 7.064 - 7.111: 98.6837% ( 2) 00:15:13.417 7.253 - 7.301: 98.6916% ( 1) 
00:15:13.417 7.443 - 7.490: 98.6995% ( 1) 00:15:13.417 7.633 - 7.680: 98.7075% ( 1) 00:15:13.417 7.680 - 7.727: 98.7154% ( 1) 00:15:13.417 7.917 - 7.964: 98.7233% ( 1) 00:15:13.417 7.964 - 8.012: 98.7313% ( 1) 00:15:13.417 8.676 - 8.723: 98.7392% ( 1) 00:15:13.417 9.434 - 9.481: 98.7471% ( 1) 00:15:13.417 9.481 - 9.529: 98.7551% ( 1) 00:15:13.417 10.382 - 10.430: 98.7630% ( 1) 00:15:13.417 15.265 - 15.360: 98.7709% ( 1) 00:15:13.417 15.550 - 15.644: 98.7868% ( 2) 00:15:13.417 15.739 - 15.834: 98.8185% ( 4) 00:15:13.417 15.834 - 15.929: 98.8264% ( 1) 00:15:13.417 15.929 - 16.024: 98.8344% ( 1) 00:15:13.417 16.024 - 16.119: 98.8581% ( 3) 00:15:13.417 16.119 - 16.213: 98.8978% ( 5) 00:15:13.417 16.213 - 16.308: 98.9374% ( 5) 00:15:13.417 16.308 - 16.403: 98.9454% ( 1) 00:15:13.417 16.403 - 16.498: 98.9692% ( 3) 00:15:13.417 16.498 - 16.593: 99.0167% ( 6) 00:15:13.417 16.593 - 16.687: 99.0326% ( 2) 00:15:13.417 16.687 - 16.782: 99.0802% ( 6) 00:15:13.417 16.782 - 16.877: 99.1277% ( 6) 00:15:13.417 16.877 - 16.972: 99.1833% ( 7) 00:15:13.417 16.972 - 17.067: 99.2070% ( 3) 00:15:13.417 17.067 - 17.161: 99.2308% ( 3) 00:15:13.417 17.161 - 17.256: 99.2546% ( 3) 00:15:13.417 17.256 - 17.351: 99.2705% ( 2) 00:15:13.417 17.351 - 17.446: 99.2943% ( 3) 00:15:13.417 17.636 - 17.730: 99.3181% ( 3) 00:15:13.417 17.825 - 17.920: 99.3339% ( 2) 00:15:13.417 17.920 - 18.015: 99.3418% ( 1) 00:15:13.417 18.110 - 18.204: 99.3577% ( 2) 00:15:13.417 19.437 - 19.532: 99.3656% ( 1) 00:15:13.417 19.627 - 19.721: 99.3736% ( 1) 00:15:13.417 28.255 - 28.444: 99.3815% ( 1) 00:15:13.417 3786.524 - 3810.797: 99.3894% ( 1) 00:15:13.417 3980.705 - 4004.978: 99.9128% ( 66) 00:15:13.417 4004.978 - 4029.250: 100.0000% ( 11) 00:15:13.417 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.417 [ 00:15:13.417 { 00:15:13.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.417 "subtype": "Discovery", 00:15:13.417 "listen_addresses": [], 00:15:13.417 "allow_any_host": true, 00:15:13.417 "hosts": [] 00:15:13.417 }, 00:15:13.417 { 00:15:13.417 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.417 "subtype": "NVMe", 00:15:13.417 "listen_addresses": [ 00:15:13.417 { 00:15:13.417 "trtype": "VFIOUSER", 00:15:13.417 "adrfam": "IPv4", 00:15:13.417 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.417 "trsvcid": "0" 00:15:13.417 } 00:15:13.417 ], 00:15:13.417 "allow_any_host": true, 00:15:13.417 "hosts": [], 00:15:13.417 "serial_number": "SPDK1", 00:15:13.417 "model_number": "SPDK bdev Controller", 00:15:13.417 "max_namespaces": 32, 00:15:13.417 "min_cntlid": 1, 00:15:13.417 "max_cntlid": 65519, 00:15:13.417 "namespaces": [ 00:15:13.417 { 00:15:13.417 "nsid": 1, 00:15:13.417 "bdev_name": "Malloc1", 00:15:13.417 "name": "Malloc1", 00:15:13.417 "nguid": "D1FF4F8E84F74B039744EACC006CC5C2", 00:15:13.417 "uuid": "d1ff4f8e-84f7-4b03-9744-eacc006cc5c2" 00:15:13.417 } 00:15:13.417 ] 00:15:13.417 }, 00:15:13.417 { 00:15:13.417 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.417 "subtype": "NVMe", 00:15:13.417 "listen_addresses": [ 00:15:13.417 { 00:15:13.417 "trtype": "VFIOUSER", 00:15:13.417 "adrfam": "IPv4", 00:15:13.417 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.417 "trsvcid": "0" 00:15:13.417 } 
00:15:13.417 ], 00:15:13.417 "allow_any_host": true, 00:15:13.417 "hosts": [], 00:15:13.417 "serial_number": "SPDK2", 00:15:13.417 "model_number": "SPDK bdev Controller", 00:15:13.417 "max_namespaces": 32, 00:15:13.417 "min_cntlid": 1, 00:15:13.417 "max_cntlid": 65519, 00:15:13.417 "namespaces": [ 00:15:13.417 { 00:15:13.417 "nsid": 1, 00:15:13.417 "bdev_name": "Malloc2", 00:15:13.417 "name": "Malloc2", 00:15:13.417 "nguid": "B030DCFCA99E423A9EBAD2FD904608CE", 00:15:13.417 "uuid": "b030dcfc-a99e-423a-9eba-d2fd904608ce" 00:15:13.417 } 00:15:13.417 ] 00:15:13.417 } 00:15:13.417 ] 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1320907 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:13.417 10:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:13.704 [2024-11-19 10:43:01.166782] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.704 Malloc3 00:15:13.704 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:14.270 [2024-11-19 10:43:01.583941] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.270 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:14.270 Asynchronous Event Request test 00:15:14.270 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.270 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.270 Registering asynchronous event callbacks... 00:15:14.270 Starting namespace attribute notice tests for all controllers... 00:15:14.270 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:14.270 aer_cb - Changed Namespace 00:15:14.270 Cleaning up... 
00:15:14.270 [ 00:15:14.270 { 00:15:14.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.270 "subtype": "Discovery", 00:15:14.270 "listen_addresses": [], 00:15:14.270 "allow_any_host": true, 00:15:14.270 "hosts": [] 00:15:14.270 }, 00:15:14.270 { 00:15:14.270 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.270 "subtype": "NVMe", 00:15:14.270 "listen_addresses": [ 00:15:14.270 { 00:15:14.270 "trtype": "VFIOUSER", 00:15:14.270 "adrfam": "IPv4", 00:15:14.270 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.270 "trsvcid": "0" 00:15:14.270 } 00:15:14.270 ], 00:15:14.270 "allow_any_host": true, 00:15:14.270 "hosts": [], 00:15:14.270 "serial_number": "SPDK1", 00:15:14.270 "model_number": "SPDK bdev Controller", 00:15:14.270 "max_namespaces": 32, 00:15:14.270 "min_cntlid": 1, 00:15:14.270 "max_cntlid": 65519, 00:15:14.270 "namespaces": [ 00:15:14.270 { 00:15:14.270 "nsid": 1, 00:15:14.270 "bdev_name": "Malloc1", 00:15:14.271 "name": "Malloc1", 00:15:14.271 "nguid": "D1FF4F8E84F74B039744EACC006CC5C2", 00:15:14.271 "uuid": "d1ff4f8e-84f7-4b03-9744-eacc006cc5c2" 00:15:14.271 }, 00:15:14.271 { 00:15:14.271 "nsid": 2, 00:15:14.271 "bdev_name": "Malloc3", 00:15:14.271 "name": "Malloc3", 00:15:14.271 "nguid": "96C15DF39ACB46D19A23E4AB743EF44E", 00:15:14.271 "uuid": "96c15df3-9acb-46d1-9a23-e4ab743ef44e" 00:15:14.271 } 00:15:14.271 ] 00:15:14.271 }, 00:15:14.271 { 00:15:14.271 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.271 "subtype": "NVMe", 00:15:14.271 "listen_addresses": [ 00:15:14.271 { 00:15:14.271 "trtype": "VFIOUSER", 00:15:14.271 "adrfam": "IPv4", 00:15:14.271 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.271 "trsvcid": "0" 00:15:14.271 } 00:15:14.271 ], 00:15:14.271 "allow_any_host": true, 00:15:14.271 "hosts": [], 00:15:14.271 "serial_number": "SPDK2", 00:15:14.271 "model_number": "SPDK bdev Controller", 00:15:14.271 "max_namespaces": 32, 00:15:14.271 "min_cntlid": 1, 00:15:14.271 "max_cntlid": 65519, 00:15:14.271 "namespaces": [ 
00:15:14.271 { 00:15:14.271 "nsid": 1, 00:15:14.271 "bdev_name": "Malloc2", 00:15:14.271 "name": "Malloc2", 00:15:14.271 "nguid": "B030DCFCA99E423A9EBAD2FD904608CE", 00:15:14.271 "uuid": "b030dcfc-a99e-423a-9eba-d2fd904608ce" 00:15:14.271 } 00:15:14.271 ] 00:15:14.271 } 00:15:14.271 ] 00:15:14.271 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1320907 00:15:14.271 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.271 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:14.271 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:14.271 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:14.271 [2024-11-19 10:43:01.889321] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:15:14.271 [2024-11-19 10:43:01.889365] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321040 ] 00:15:14.531 [2024-11-19 10:43:01.936896] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:14.531 [2024-11-19 10:43:01.945653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.531 [2024-11-19 10:43:01.945688] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fecc069c000 00:15:14.531 [2024-11-19 10:43:01.946635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.947639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.948664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.949661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.950665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.951676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.952679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.531 
[2024-11-19 10:43:01.953681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.531 [2024-11-19 10:43:01.954688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.531 [2024-11-19 10:43:01.954710] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fecc0691000 00:15:14.531 [2024-11-19 10:43:01.956022] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.531 [2024-11-19 10:43:01.972412] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:14.531 [2024-11-19 10:43:01.972458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:14.531 [2024-11-19 10:43:01.977575] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.531 [2024-11-19 10:43:01.977646] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:14.531 [2024-11-19 10:43:01.977737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:14.531 [2024-11-19 10:43:01.977763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:14.531 [2024-11-19 10:43:01.977775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:14.531 [2024-11-19 10:43:01.978578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:14.531 [2024-11-19 10:43:01.978614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:14.532 [2024-11-19 10:43:01.978627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:14.532 [2024-11-19 10:43:01.979587] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.532 [2024-11-19 10:43:01.979624] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:14.532 [2024-11-19 10:43:01.979639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.980595] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:14.532 [2024-11-19 10:43:01.980630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.981604] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:14.532 [2024-11-19 10:43:01.981625] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:14.532 [2024-11-19 10:43:01.981635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.981662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.981772] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:14.532 [2024-11-19 10:43:01.981780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.981788] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:14.532 [2024-11-19 10:43:01.982621] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:14.532 [2024-11-19 10:43:01.983622] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:14.532 [2024-11-19 10:43:01.984631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:14.532 [2024-11-19 10:43:01.985624] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.532 [2024-11-19 10:43:01.985709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:14.532 [2024-11-19 10:43:01.986635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:14.532 [2024-11-19 10:43:01.986656] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:14.532 [2024-11-19 10:43:01.986667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:01.986692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:14.532 [2024-11-19 10:43:01.986714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:01.986740] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.532 [2024-11-19 10:43:01.986751] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.532 [2024-11-19 10:43:01.986758] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.532 [2024-11-19 10:43:01.986779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.532 [2024-11-19 10:43:01.995321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:14.532 [2024-11-19 10:43:01.995347] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:14.532 [2024-11-19 10:43:01.995356] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:14.532 [2024-11-19 10:43:01.995363] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:14.532 [2024-11-19 10:43:01.995372] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:14.532 [2024-11-19 10:43:01.995384] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:14.532 [2024-11-19 10:43:01.995394] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:14.532 [2024-11-19 10:43:01.995403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:01.995419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:01.995437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:14.532 [2024-11-19 10:43:02.003316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:14.532 [2024-11-19 10:43:02.003341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.532 [2024-11-19 10:43:02.003355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.532 [2024-11-19 10:43:02.003367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.532 [2024-11-19 10:43:02.003379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.532 [2024-11-19 10:43:02.003388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.003400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.003414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:14.532 [2024-11-19 10:43:02.011316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:14.532 [2024-11-19 10:43:02.011341] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:14.532 [2024-11-19 10:43:02.011355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.011367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.011377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.011391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.532 [2024-11-19 10:43:02.019314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:14.532 [2024-11-19 10:43:02.019391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.019409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:14.532 
[2024-11-19 10:43:02.019423] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:14.532 [2024-11-19 10:43:02.019432] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:14.532 [2024-11-19 10:43:02.019438] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.532 [2024-11-19 10:43:02.019448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:14.532 [2024-11-19 10:43:02.027316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:14.532 [2024-11-19 10:43:02.027340] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:14.532 [2024-11-19 10:43:02.027361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.027377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:14.532 [2024-11-19 10:43:02.027390] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.532 [2024-11-19 10:43:02.027399] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.533 [2024-11-19 10:43:02.027405] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.533 [2024-11-19 10:43:02.027414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.035316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.035347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.035364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.035377] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.533 [2024-11-19 10:43:02.035386] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.533 [2024-11-19 10:43:02.035393] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.533 [2024-11-19 10:43:02.035402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.043328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.043351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043418] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:14.533 [2024-11-19 10:43:02.043426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:14.533 [2024-11-19 10:43:02.043435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:14.533 [2024-11-19 10:43:02.043461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.051312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.051340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.059316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.059341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.067314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 
10:43:02.067339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.075332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.075367] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:14.533 [2024-11-19 10:43:02.075379] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:14.533 [2024-11-19 10:43:02.075386] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:14.533 [2024-11-19 10:43:02.075392] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:14.533 [2024-11-19 10:43:02.075398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:14.533 [2024-11-19 10:43:02.075408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:14.533 [2024-11-19 10:43:02.075421] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:14.533 [2024-11-19 10:43:02.075430] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:14.533 [2024-11-19 10:43:02.075436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.533 [2024-11-19 10:43:02.075449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.075462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:14.533 [2024-11-19 10:43:02.075471] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.533 [2024-11-19 10:43:02.075477] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.533 [2024-11-19 10:43:02.075486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.075499] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:14.533 [2024-11-19 10:43:02.075507] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:14.533 [2024-11-19 10:43:02.075513] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.533 [2024-11-19 10:43:02.075522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:14.533 [2024-11-19 10:43:02.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.083359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:14.533 [2024-11-19 10:43:02.083372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:14.533 ===================================================== 00:15:14.533 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.533 ===================================================== 00:15:14.533 Controller Capabilities/Features 00:15:14.533 
================================ 00:15:14.533 Vendor ID: 4e58 00:15:14.533 Subsystem Vendor ID: 4e58 00:15:14.533 Serial Number: SPDK2 00:15:14.533 Model Number: SPDK bdev Controller 00:15:14.533 Firmware Version: 25.01 00:15:14.533 Recommended Arb Burst: 6 00:15:14.533 IEEE OUI Identifier: 8d 6b 50 00:15:14.533 Multi-path I/O 00:15:14.533 May have multiple subsystem ports: Yes 00:15:14.533 May have multiple controllers: Yes 00:15:14.533 Associated with SR-IOV VF: No 00:15:14.533 Max Data Transfer Size: 131072 00:15:14.533 Max Number of Namespaces: 32 00:15:14.533 Max Number of I/O Queues: 127 00:15:14.533 NVMe Specification Version (VS): 1.3 00:15:14.533 NVMe Specification Version (Identify): 1.3 00:15:14.533 Maximum Queue Entries: 256 00:15:14.533 Contiguous Queues Required: Yes 00:15:14.533 Arbitration Mechanisms Supported 00:15:14.533 Weighted Round Robin: Not Supported 00:15:14.533 Vendor Specific: Not Supported 00:15:14.533 Reset Timeout: 15000 ms 00:15:14.533 Doorbell Stride: 4 bytes 00:15:14.533 NVM Subsystem Reset: Not Supported 00:15:14.533 Command Sets Supported 00:15:14.533 NVM Command Set: Supported 00:15:14.533 Boot Partition: Not Supported 00:15:14.533 Memory Page Size Minimum: 4096 bytes 00:15:14.533 Memory Page Size Maximum: 4096 bytes 00:15:14.533 Persistent Memory Region: Not Supported 00:15:14.533 Optional Asynchronous Events Supported 00:15:14.533 Namespace Attribute Notices: Supported 00:15:14.533 Firmware Activation Notices: Not Supported 00:15:14.533 ANA Change Notices: Not Supported 00:15:14.533 PLE Aggregate Log Change Notices: Not Supported 00:15:14.533 LBA Status Info Alert Notices: Not Supported 00:15:14.533 EGE Aggregate Log Change Notices: Not Supported 00:15:14.533 Normal NVM Subsystem Shutdown event: Not Supported 00:15:14.533 Zone Descriptor Change Notices: Not Supported 00:15:14.534 Discovery Log Change Notices: Not Supported 00:15:14.534 Controller Attributes 00:15:14.534 128-bit Host Identifier: Supported 00:15:14.534 
Non-Operational Permissive Mode: Not Supported 00:15:14.534 NVM Sets: Not Supported 00:15:14.534 Read Recovery Levels: Not Supported 00:15:14.534 Endurance Groups: Not Supported 00:15:14.534 Predictable Latency Mode: Not Supported 00:15:14.534 Traffic Based Keep ALive: Not Supported 00:15:14.534 Namespace Granularity: Not Supported 00:15:14.534 SQ Associations: Not Supported 00:15:14.534 UUID List: Not Supported 00:15:14.534 Multi-Domain Subsystem: Not Supported 00:15:14.534 Fixed Capacity Management: Not Supported 00:15:14.534 Variable Capacity Management: Not Supported 00:15:14.534 Delete Endurance Group: Not Supported 00:15:14.534 Delete NVM Set: Not Supported 00:15:14.534 Extended LBA Formats Supported: Not Supported 00:15:14.534 Flexible Data Placement Supported: Not Supported 00:15:14.534 00:15:14.534 Controller Memory Buffer Support 00:15:14.534 ================================ 00:15:14.534 Supported: No 00:15:14.534 00:15:14.534 Persistent Memory Region Support 00:15:14.534 ================================ 00:15:14.534 Supported: No 00:15:14.534 00:15:14.534 Admin Command Set Attributes 00:15:14.534 ============================ 00:15:14.534 Security Send/Receive: Not Supported 00:15:14.534 Format NVM: Not Supported 00:15:14.534 Firmware Activate/Download: Not Supported 00:15:14.534 Namespace Management: Not Supported 00:15:14.534 Device Self-Test: Not Supported 00:15:14.534 Directives: Not Supported 00:15:14.534 NVMe-MI: Not Supported 00:15:14.534 Virtualization Management: Not Supported 00:15:14.534 Doorbell Buffer Config: Not Supported 00:15:14.534 Get LBA Status Capability: Not Supported 00:15:14.534 Command & Feature Lockdown Capability: Not Supported 00:15:14.534 Abort Command Limit: 4 00:15:14.534 Async Event Request Limit: 4 00:15:14.534 Number of Firmware Slots: N/A 00:15:14.534 Firmware Slot 1 Read-Only: N/A 00:15:14.534 Firmware Activation Without Reset: N/A 00:15:14.534 Multiple Update Detection Support: N/A 00:15:14.534 Firmware Update 
Granularity: No Information Provided 00:15:14.534 Per-Namespace SMART Log: No 00:15:14.534 Asymmetric Namespace Access Log Page: Not Supported 00:15:14.534 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:14.534 Command Effects Log Page: Supported 00:15:14.534 Get Log Page Extended Data: Supported 00:15:14.534 Telemetry Log Pages: Not Supported 00:15:14.534 Persistent Event Log Pages: Not Supported 00:15:14.534 Supported Log Pages Log Page: May Support 00:15:14.534 Commands Supported & Effects Log Page: Not Supported 00:15:14.534 Feature Identifiers & Effects Log Page:May Support 00:15:14.534 NVMe-MI Commands & Effects Log Page: May Support 00:15:14.534 Data Area 4 for Telemetry Log: Not Supported 00:15:14.534 Error Log Page Entries Supported: 128 00:15:14.534 Keep Alive: Supported 00:15:14.534 Keep Alive Granularity: 10000 ms 00:15:14.534 00:15:14.534 NVM Command Set Attributes 00:15:14.534 ========================== 00:15:14.534 Submission Queue Entry Size 00:15:14.534 Max: 64 00:15:14.534 Min: 64 00:15:14.534 Completion Queue Entry Size 00:15:14.534 Max: 16 00:15:14.534 Min: 16 00:15:14.534 Number of Namespaces: 32 00:15:14.534 Compare Command: Supported 00:15:14.534 Write Uncorrectable Command: Not Supported 00:15:14.534 Dataset Management Command: Supported 00:15:14.534 Write Zeroes Command: Supported 00:15:14.534 Set Features Save Field: Not Supported 00:15:14.534 Reservations: Not Supported 00:15:14.534 Timestamp: Not Supported 00:15:14.534 Copy: Supported 00:15:14.534 Volatile Write Cache: Present 00:15:14.534 Atomic Write Unit (Normal): 1 00:15:14.534 Atomic Write Unit (PFail): 1 00:15:14.534 Atomic Compare & Write Unit: 1 00:15:14.534 Fused Compare & Write: Supported 00:15:14.534 Scatter-Gather List 00:15:14.534 SGL Command Set: Supported (Dword aligned) 00:15:14.534 SGL Keyed: Not Supported 00:15:14.534 SGL Bit Bucket Descriptor: Not Supported 00:15:14.534 SGL Metadata Pointer: Not Supported 00:15:14.534 Oversized SGL: Not Supported 00:15:14.534 SGL 
Metadata Address: Not Supported 00:15:14.534 SGL Offset: Not Supported 00:15:14.534 Transport SGL Data Block: Not Supported 00:15:14.534 Replay Protected Memory Block: Not Supported 00:15:14.534 00:15:14.534 Firmware Slot Information 00:15:14.534 ========================= 00:15:14.534 Active slot: 1 00:15:14.534 Slot 1 Firmware Revision: 25.01 00:15:14.534 00:15:14.534 00:15:14.534 Commands Supported and Effects 00:15:14.534 ============================== 00:15:14.534 Admin Commands 00:15:14.534 -------------- 00:15:14.534 Get Log Page (02h): Supported 00:15:14.534 Identify (06h): Supported 00:15:14.534 Abort (08h): Supported 00:15:14.534 Set Features (09h): Supported 00:15:14.534 Get Features (0Ah): Supported 00:15:14.534 Asynchronous Event Request (0Ch): Supported 00:15:14.534 Keep Alive (18h): Supported 00:15:14.534 I/O Commands 00:15:14.534 ------------ 00:15:14.534 Flush (00h): Supported LBA-Change 00:15:14.534 Write (01h): Supported LBA-Change 00:15:14.534 Read (02h): Supported 00:15:14.534 Compare (05h): Supported 00:15:14.534 Write Zeroes (08h): Supported LBA-Change 00:15:14.534 Dataset Management (09h): Supported LBA-Change 00:15:14.534 Copy (19h): Supported LBA-Change 00:15:14.534 00:15:14.534 Error Log 00:15:14.534 ========= 00:15:14.534 00:15:14.534 Arbitration 00:15:14.534 =========== 00:15:14.534 Arbitration Burst: 1 00:15:14.534 00:15:14.534 Power Management 00:15:14.534 ================ 00:15:14.534 Number of Power States: 1 00:15:14.534 Current Power State: Power State #0 00:15:14.534 Power State #0: 00:15:14.534 Max Power: 0.00 W 00:15:14.534 Non-Operational State: Operational 00:15:14.534 Entry Latency: Not Reported 00:15:14.534 Exit Latency: Not Reported 00:15:14.534 Relative Read Throughput: 0 00:15:14.534 Relative Read Latency: 0 00:15:14.534 Relative Write Throughput: 0 00:15:14.534 Relative Write Latency: 0 00:15:14.534 Idle Power: Not Reported 00:15:14.534 Active Power: Not Reported 00:15:14.534 Non-Operational Permissive Mode: Not 
Supported 00:15:14.534 00:15:14.534 Health Information 00:15:14.534 ================== 00:15:14.534 Critical Warnings: 00:15:14.534 Available Spare Space: OK 00:15:14.534 Temperature: OK 00:15:14.534 Device Reliability: OK 00:15:14.534 Read Only: No 00:15:14.534 Volatile Memory Backup: OK 00:15:14.534 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:14.534 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:14.534 Available Spare: 0% 00:15:14.534 Available Sp[2024-11-19 10:43:02.083488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:14.534 [2024-11-19 10:43:02.091315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:14.534 [2024-11-19 10:43:02.091363] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:14.534 [2024-11-19 10:43:02.091381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.535 [2024-11-19 10:43:02.091392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.535 [2024-11-19 10:43:02.091402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.535 [2024-11-19 10:43:02.091411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.535 [2024-11-19 10:43:02.091499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:14.535 [2024-11-19 10:43:02.091521] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:14.535 
[2024-11-19 10:43:02.092499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.535 [2024-11-19 10:43:02.092588] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:14.535 [2024-11-19 10:43:02.092618] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:14.535 [2024-11-19 10:43:02.093510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:14.535 [2024-11-19 10:43:02.093534] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:14.535 [2024-11-19 10:43:02.093608] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:14.535 [2024-11-19 10:43:02.094792] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.535 are Threshold: 0% 00:15:14.535 Life Percentage Used: 0% 00:15:14.535 Data Units Read: 0 00:15:14.535 Data Units Written: 0 00:15:14.535 Host Read Commands: 0 00:15:14.535 Host Write Commands: 0 00:15:14.535 Controller Busy Time: 0 minutes 00:15:14.535 Power Cycles: 0 00:15:14.535 Power On Hours: 0 hours 00:15:14.535 Unsafe Shutdowns: 0 00:15:14.535 Unrecoverable Media Errors: 0 00:15:14.535 Lifetime Error Log Entries: 0 00:15:14.535 Warning Temperature Time: 0 minutes 00:15:14.535 Critical Temperature Time: 0 minutes 00:15:14.535 00:15:14.535 Number of Queues 00:15:14.535 ================ 00:15:14.535 Number of I/O Submission Queues: 127 00:15:14.535 Number of I/O Completion Queues: 127 00:15:14.535 00:15:14.535 Active Namespaces 00:15:14.535 ================= 00:15:14.535 Namespace ID:1 00:15:14.535 Error Recovery Timeout: Unlimited 
00:15:14.535 Command Set Identifier: NVM (00h) 00:15:14.535 Deallocate: Supported 00:15:14.535 Deallocated/Unwritten Error: Not Supported 00:15:14.535 Deallocated Read Value: Unknown 00:15:14.535 Deallocate in Write Zeroes: Not Supported 00:15:14.535 Deallocated Guard Field: 0xFFFF 00:15:14.535 Flush: Supported 00:15:14.535 Reservation: Supported 00:15:14.535 Namespace Sharing Capabilities: Multiple Controllers 00:15:14.535 Size (in LBAs): 131072 (0GiB) 00:15:14.535 Capacity (in LBAs): 131072 (0GiB) 00:15:14.535 Utilization (in LBAs): 131072 (0GiB) 00:15:14.535 NGUID: B030DCFCA99E423A9EBAD2FD904608CE 00:15:14.535 UUID: b030dcfc-a99e-423a-9eba-d2fd904608ce 00:15:14.535 Thin Provisioning: Not Supported 00:15:14.535 Per-NS Atomic Units: Yes 00:15:14.535 Atomic Boundary Size (Normal): 0 00:15:14.535 Atomic Boundary Size (PFail): 0 00:15:14.535 Atomic Boundary Offset: 0 00:15:14.535 Maximum Single Source Range Length: 65535 00:15:14.535 Maximum Copy Length: 65535 00:15:14.535 Maximum Source Range Count: 1 00:15:14.535 NGUID/EUI64 Never Reused: No 00:15:14.535 Namespace Write Protected: No 00:15:14.535 Number of LBA Formats: 1 00:15:14.535 Current LBA Format: LBA Format #00 00:15:14.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:14.535 00:15:14.535 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.793 [2024-11-19 10:43:02.344004] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.056 Initializing NVMe Controllers 00:15:20.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:20.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:20.056 Initialization complete. Launching workers. 00:15:20.056 ======================================================== 00:15:20.056 Latency(us) 00:15:20.056 Device Information : IOPS MiB/s Average min max 00:15:20.056 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33931.27 132.54 3771.44 1167.08 10643.52 00:15:20.056 ======================================================== 00:15:20.056 Total : 33931.27 132.54 3771.44 1167.08 10643.52 00:15:20.056 00:15:20.056 [2024-11-19 10:43:07.453705] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.056 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:20.314 [2024-11-19 10:43:07.707406] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.579 Initializing NVMe Controllers 00:15:25.579 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.579 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:25.579 Initialization complete. Launching workers. 
00:15:25.579 ======================================================== 00:15:25.579 Latency(us) 00:15:25.579 Device Information : IOPS MiB/s Average min max 00:15:25.579 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30874.46 120.60 4145.12 1220.00 7674.66 00:15:25.579 ======================================================== 00:15:25.579 Total : 30874.46 120.60 4145.12 1220.00 7674.66 00:15:25.579 00:15:25.579 [2024-11-19 10:43:12.730656] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.579 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:25.579 [2024-11-19 10:43:12.950480] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.842 [2024-11-19 10:43:18.101457] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.842 Initializing NVMe Controllers 00:15:30.842 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:30.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:30.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:30.842 Initialization complete. Launching workers. 
00:15:30.842 Starting thread on core 2 00:15:30.842 Starting thread on core 3 00:15:30.842 Starting thread on core 1 00:15:30.842 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:30.842 [2024-11-19 10:43:18.421617] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.122 [2024-11-19 10:43:21.484176] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.122 Initializing NVMe Controllers 00:15:34.122 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.122 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.122 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:34.122 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:34.122 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:34.122 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:34.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:34.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:34.122 Initialization complete. Launching workers. 
00:15:34.122 Starting thread on core 1 with urgent priority queue 00:15:34.122 Starting thread on core 2 with urgent priority queue 00:15:34.122 Starting thread on core 3 with urgent priority queue 00:15:34.122 Starting thread on core 0 with urgent priority queue 00:15:34.122 SPDK bdev Controller (SPDK2 ) core 0: 4931.67 IO/s 20.28 secs/100000 ios 00:15:34.122 SPDK bdev Controller (SPDK2 ) core 1: 4832.33 IO/s 20.69 secs/100000 ios 00:15:34.122 SPDK bdev Controller (SPDK2 ) core 2: 5173.33 IO/s 19.33 secs/100000 ios 00:15:34.122 SPDK bdev Controller (SPDK2 ) core 3: 5392.00 IO/s 18.55 secs/100000 ios 00:15:34.122 ======================================================== 00:15:34.122 00:15:34.122 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:34.380 [2024-11-19 10:43:21.809836] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.380 Initializing NVMe Controllers 00:15:34.380 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.380 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.380 Namespace ID: 1 size: 0GB 00:15:34.380 Initialization complete. 00:15:34.380 INFO: using host memory buffer for IO 00:15:34.380 Hello world! 
00:15:34.380 [2024-11-19 10:43:21.819894] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.380 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:34.638 [2024-11-19 10:43:22.124044] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.014 Initializing NVMe Controllers 00:15:36.014 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.014 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.014 Initialization complete. Launching workers. 00:15:36.014 submit (in ns) avg, min, max = 7624.6, 3566.7, 4017660.0 00:15:36.014 complete (in ns) avg, min, max = 24932.0, 2063.3, 4016351.1 00:15:36.014 00:15:36.014 Submit histogram 00:15:36.014 ================ 00:15:36.014 Range in us Cumulative Count 00:15:36.014 3.556 - 3.579: 0.2070% ( 27) 00:15:36.014 3.579 - 3.603: 2.5759% ( 309) 00:15:36.014 3.603 - 3.627: 8.5250% ( 776) 00:15:36.014 3.627 - 3.650: 19.1966% ( 1392) 00:15:36.014 3.650 - 3.674: 29.1552% ( 1299) 00:15:36.014 3.674 - 3.698: 38.4468% ( 1212) 00:15:36.014 3.698 - 3.721: 46.7035% ( 1077) 00:15:36.014 3.721 - 3.745: 53.2812% ( 858) 00:15:36.014 3.745 - 3.769: 58.6170% ( 696) 00:15:36.014 3.769 - 3.793: 63.6078% ( 651) 00:15:36.014 3.793 - 3.816: 67.2570% ( 476) 00:15:36.014 3.816 - 3.840: 70.7068% ( 450) 00:15:36.014 3.840 - 3.864: 74.0954% ( 442) 00:15:36.014 3.864 - 3.887: 77.8289% ( 487) 00:15:36.014 3.887 - 3.911: 81.7234% ( 508) 00:15:36.014 3.911 - 3.935: 85.0199% ( 430) 00:15:36.014 3.935 - 3.959: 87.3735% ( 307) 00:15:36.014 3.959 - 3.982: 89.4588% ( 272) 00:15:36.014 3.982 - 4.006: 91.4290% ( 257) 00:15:36.014 4.006 - 4.030: 92.6786% ( 163) 00:15:36.014 4.030 - 4.053: 93.8132% ( 
148) 00:15:36.014 4.053 - 4.077: 94.7102% ( 117) 00:15:36.014 4.077 - 4.101: 95.3849% ( 88) 00:15:36.014 4.101 - 4.124: 95.9138% ( 69) 00:15:36.014 4.124 - 4.148: 96.3048% ( 51) 00:15:36.014 4.148 - 4.172: 96.5271% ( 29) 00:15:36.014 4.172 - 4.196: 96.6881% ( 21) 00:15:36.014 4.196 - 4.219: 96.8491% ( 21) 00:15:36.014 4.219 - 4.243: 96.9718% ( 16) 00:15:36.014 4.243 - 4.267: 97.0791% ( 14) 00:15:36.014 4.267 - 4.290: 97.1634% ( 11) 00:15:36.014 4.290 - 4.314: 97.2478% ( 11) 00:15:36.015 4.314 - 4.338: 97.3014% ( 7) 00:15:36.015 4.338 - 4.361: 97.3934% ( 12) 00:15:36.015 4.361 - 4.385: 97.4318% ( 5) 00:15:36.015 4.385 - 4.409: 97.4548% ( 3) 00:15:36.015 4.409 - 4.433: 97.4854% ( 4) 00:15:36.015 4.433 - 4.456: 97.5084% ( 3) 00:15:36.015 4.456 - 4.480: 97.5238% ( 2) 00:15:36.015 4.480 - 4.504: 97.5698% ( 6) 00:15:36.015 4.527 - 4.551: 97.5774% ( 1) 00:15:36.015 4.551 - 4.575: 97.5928% ( 2) 00:15:36.015 4.575 - 4.599: 97.6004% ( 1) 00:15:36.015 4.599 - 4.622: 97.6158% ( 2) 00:15:36.015 4.622 - 4.646: 97.6541% ( 5) 00:15:36.015 4.670 - 4.693: 97.7001% ( 6) 00:15:36.015 4.693 - 4.717: 97.7461% ( 6) 00:15:36.015 4.717 - 4.741: 97.7844% ( 5) 00:15:36.015 4.741 - 4.764: 97.8304% ( 6) 00:15:36.015 4.764 - 4.788: 97.8458% ( 2) 00:15:36.015 4.788 - 4.812: 97.8764% ( 4) 00:15:36.015 4.812 - 4.836: 97.9148% ( 5) 00:15:36.015 4.836 - 4.859: 97.9837% ( 9) 00:15:36.015 4.859 - 4.883: 98.0067% ( 3) 00:15:36.015 4.883 - 4.907: 98.0297% ( 3) 00:15:36.015 4.907 - 4.930: 98.0527% ( 3) 00:15:36.015 4.930 - 4.954: 98.0681% ( 2) 00:15:36.015 4.954 - 4.978: 98.1141% ( 6) 00:15:36.015 4.978 - 5.001: 98.1601% ( 6) 00:15:36.015 5.001 - 5.025: 98.1984% ( 5) 00:15:36.015 5.025 - 5.049: 98.2137% ( 2) 00:15:36.015 5.049 - 5.073: 98.2291% ( 2) 00:15:36.015 5.073 - 5.096: 98.2521% ( 3) 00:15:36.015 5.096 - 5.120: 98.2981% ( 6) 00:15:36.015 5.120 - 5.144: 98.3134% ( 2) 00:15:36.015 5.167 - 5.191: 98.3364% ( 3) 00:15:36.015 5.191 - 5.215: 98.3441% ( 1) 00:15:36.015 5.215 - 5.239: 98.3517% ( 1) 
00:15:36.015 5.239 - 5.262: 98.3594% ( 1) 00:15:36.015 5.262 - 5.286: 98.3747% ( 2) 00:15:36.015 5.286 - 5.310: 98.3824% ( 1) 00:15:36.015 5.310 - 5.333: 98.3977% ( 2) 00:15:36.015 5.357 - 5.381: 98.4054% ( 1) 00:15:36.015 5.381 - 5.404: 98.4131% ( 1) 00:15:36.015 5.476 - 5.499: 98.4207% ( 1) 00:15:36.015 5.570 - 5.594: 98.4284% ( 1) 00:15:36.015 5.594 - 5.618: 98.4361% ( 1) 00:15:36.015 5.665 - 5.689: 98.4437% ( 1) 00:15:36.015 5.736 - 5.760: 98.4514% ( 1) 00:15:36.015 5.807 - 5.831: 98.4591% ( 1) 00:15:36.015 5.902 - 5.926: 98.4667% ( 1) 00:15:36.015 6.068 - 6.116: 98.4744% ( 1) 00:15:36.015 6.163 - 6.210: 98.4821% ( 1) 00:15:36.015 6.305 - 6.353: 98.4974% ( 2) 00:15:36.015 6.353 - 6.400: 98.5051% ( 1) 00:15:36.015 6.495 - 6.542: 98.5127% ( 1) 00:15:36.015 6.542 - 6.590: 98.5204% ( 1) 00:15:36.015 6.827 - 6.874: 98.5281% ( 1) 00:15:36.015 7.016 - 7.064: 98.5357% ( 1) 00:15:36.015 7.064 - 7.111: 98.5664% ( 4) 00:15:36.015 7.111 - 7.159: 98.5741% ( 1) 00:15:36.015 7.159 - 7.206: 98.5817% ( 1) 00:15:36.015 7.253 - 7.301: 98.5971% ( 2) 00:15:36.015 7.301 - 7.348: 98.6124% ( 2) 00:15:36.015 7.348 - 7.396: 98.6277% ( 2) 00:15:36.015 7.490 - 7.538: 98.6354% ( 1) 00:15:36.015 7.585 - 7.633: 98.6431% ( 1) 00:15:36.015 7.727 - 7.775: 98.6584% ( 2) 00:15:36.015 7.822 - 7.870: 98.6737% ( 2) 00:15:36.015 7.917 - 7.964: 98.6814% ( 1) 00:15:36.015 8.107 - 8.154: 98.6891% ( 1) 00:15:36.015 8.154 - 8.201: 98.7044% ( 2) 00:15:36.015 8.201 - 8.249: 98.7197% ( 2) 00:15:36.015 8.296 - 8.344: 98.7351% ( 2) 00:15:36.015 8.344 - 8.391: 98.7427% ( 1) 00:15:36.015 8.486 - 8.533: 98.7580% ( 2) 00:15:36.015 8.533 - 8.581: 98.7657% ( 1) 00:15:36.015 8.581 - 8.628: 98.7734% ( 1) 00:15:36.015 8.628 - 8.676: 98.7810% ( 1) 00:15:36.015 8.770 - 8.818: 98.7887% ( 1) 00:15:36.015 8.818 - 8.865: 98.7964% ( 1) 00:15:36.015 9.007 - 9.055: 98.8040% ( 1) 00:15:36.015 9.055 - 9.102: 98.8194% ( 2) 00:15:36.015 9.434 - 9.481: 98.8347% ( 2) 00:15:36.015 9.576 - 9.624: 98.8424% ( 1) 00:15:36.015 9.813 - 
9.861: 98.8500% ( 1) 00:15:36.015 9.908 - 9.956: 98.8654% ( 2) 00:15:36.015 9.956 - 10.003: 98.8730% ( 1) 00:15:36.015 10.572 - 10.619: 98.8807% ( 1) 00:15:36.015 10.667 - 10.714: 98.8884% ( 1) 00:15:36.015 10.904 - 10.951: 98.9037% ( 2) 00:15:36.015 10.999 - 11.046: 98.9114% ( 1) 00:15:36.015 11.093 - 11.141: 98.9190% ( 1) 00:15:36.015 11.141 - 11.188: 98.9267% ( 1) 00:15:36.015 11.520 - 11.567: 98.9344% ( 1) 00:15:36.015 11.615 - 11.662: 98.9420% ( 1) 00:15:36.015 11.804 - 11.852: 98.9497% ( 1) 00:15:36.015 12.041 - 12.089: 98.9574% ( 1) 00:15:36.015 12.089 - 12.136: 98.9650% ( 1) 00:15:36.015 12.231 - 12.326: 98.9727% ( 1) 00:15:36.015 12.800 - 12.895: 98.9804% ( 1) 00:15:36.015 13.369 - 13.464: 98.9880% ( 1) 00:15:36.015 13.464 - 13.559: 99.0034% ( 2) 00:15:36.015 13.843 - 13.938: 99.0110% ( 1) 00:15:36.015 14.033 - 14.127: 99.0187% ( 1) 00:15:36.015 14.507 - 14.601: 99.0340% ( 2) 00:15:36.015 14.696 - 14.791: 99.0417% ( 1) 00:15:36.015 14.791 - 14.886: 99.0494% ( 1) 00:15:36.015 15.360 - 15.455: 99.0570% ( 1) 00:15:36.015 17.161 - 17.256: 99.0647% ( 1) 00:15:36.015 17.256 - 17.351: 99.0800% ( 2) 00:15:36.015 17.351 - 17.446: 99.0954% ( 2) 00:15:36.015 17.541 - 17.636: 99.1337% ( 5) 00:15:36.015 17.636 - 17.730: 99.2027% ( 9) 00:15:36.015 17.730 - 17.825: 99.2564% ( 7) 00:15:36.015 17.825 - 17.920: 99.3407% ( 11) 00:15:36.015 17.920 - 18.015: 99.3714% ( 4) 00:15:36.016 18.015 - 18.110: 99.4174% ( 6) 00:15:36.016 18.110 - 18.204: 99.4327% ( 2) 00:15:36.016 18.204 - 18.299: 99.4940% ( 8) 00:15:36.016 18.394 - 18.489: 99.5860% ( 12) 00:15:36.016 18.489 - 18.584: 99.6473% ( 8) 00:15:36.016 18.584 - 18.679: 99.6627% ( 2) 00:15:36.016 18.679 - 18.773: 99.7010% ( 5) 00:15:36.016 18.773 - 18.868: 99.7087% ( 1) 00:15:36.016 18.868 - 18.963: 99.7547% ( 6) 00:15:36.016 18.963 - 19.058: 99.7853% ( 4) 00:15:36.016 19.058 - 19.153: 99.8007% ( 2) 00:15:36.016 19.153 - 19.247: 99.8083% ( 1) 00:15:36.016 19.247 - 19.342: 99.8313% ( 3) 00:15:36.016 19.532 - 19.627: 99.8467% ( 2) 
00:15:36.016 19.721 - 19.816: 99.8543% ( 1) 00:15:36.016 20.101 - 20.196: 99.8620% ( 1) 00:15:36.016 21.713 - 21.807: 99.8697% ( 1) 00:15:36.016 21.902 - 21.997: 99.8773% ( 1) 00:15:36.016 22.471 - 22.566: 99.8850% ( 1) 00:15:36.016 23.988 - 24.083: 99.8927% ( 1) 00:15:36.016 25.221 - 25.410: 99.9003% ( 1) 00:15:36.016 27.876 - 28.065: 99.9080% ( 1) 00:15:36.016 3980.705 - 4004.978: 99.9463% ( 5) 00:15:36.016 4004.978 - 4029.250: 100.0000% ( 7) 00:15:36.016 00:15:36.016 Complete histogram 00:15:36.016 ================== 00:15:36.016 Range in us Cumulative Count 00:15:36.016 2.062 - 2.074: 8.3103% ( 1084) 00:15:36.016 2.074 - 2.086: 46.2435% ( 4948) 00:15:36.016 2.086 - 2.098: 49.9693% ( 486) 00:15:36.016 2.098 - 2.110: 56.3094% ( 827) 00:15:36.016 2.110 - 2.121: 62.8335% ( 851) 00:15:36.016 2.121 - 2.133: 64.1751% ( 175) 00:15:36.016 2.133 - 2.145: 73.6737% ( 1239) 00:15:36.016 2.145 - 2.157: 82.8350% ( 1195) 00:15:36.016 2.157 - 2.169: 83.9696% ( 148) 00:15:36.016 2.169 - 2.181: 86.7372% ( 361) 00:15:36.016 2.181 - 2.193: 88.4698% ( 226) 00:15:36.016 2.193 - 2.204: 89.0448% ( 75) 00:15:36.016 2.204 - 2.216: 90.6087% ( 204) 00:15:36.016 2.216 - 2.228: 91.6130% ( 131) 00:15:36.016 2.228 - 2.240: 92.4716% ( 112) 00:15:36.016 2.240 - 2.252: 94.1506% ( 219) 00:15:36.016 2.252 - 2.264: 94.9785% ( 108) 00:15:36.016 2.264 - 2.276: 95.1165% ( 18) 00:15:36.016 2.276 - 2.287: 95.2622% ( 19) 00:15:36.016 2.287 - 2.299: 95.3849% ( 16) 00:15:36.016 2.299 - 2.311: 95.5458% ( 21) 00:15:36.016 2.311 - 2.323: 95.8678% ( 42) 00:15:36.016 2.323 - 2.335: 96.0595% ( 25) 00:15:36.016 2.335 - 2.347: 96.0748% ( 2) 00:15:36.016 2.347 - 2.359: 96.1132% ( 5) 00:15:36.016 2.359 - 2.370: 96.2358% ( 16) 00:15:36.016 2.370 - 2.382: 96.3968% ( 21) 00:15:36.016 2.382 - 2.394: 96.6958% ( 39) 00:15:36.016 2.394 - 2.406: 97.1174% ( 55) 00:15:36.016 2.406 - 2.418: 97.3934% ( 36) 00:15:36.016 2.418 - 2.430: 97.7001% ( 40) 00:15:36.016 2.430 - 2.441: 97.8304% ( 17) 00:15:36.016 2.441 - 2.453: 97.9224% ( 
12) 00:15:36.016 2.453 - 2.465: 98.0757% ( 20) 00:15:36.016 2.465 - 2.477: 98.1754% ( 13) 00:15:36.016 2.477 - 2.489: 98.2291% ( 7) 00:15:36.016 2.489 - 2.501: 98.2751% ( 6) 00:15:36.016 2.501 - 2.513: 98.3057% ( 4) 00:15:36.016 2.513 - 2.524: 98.3211% ( 2) 00:15:36.016 2.524 - 2.536: 98.3287% ( 1) 00:15:36.016 2.536 - 2.548: 98.3441% ( 2) 00:15:36.016 2.548 - 2.560: 98.3517% ( 1) 00:15:36.016 2.560 - 2.572: 98.3594% ( 1) 00:15:36.016 2.572 - 2.584: 98.3671% ( 1) 00:15:36.016 2.584 - 2.596: 98.3747% ( 1) 00:15:36.016 2.596 - 2.607: 98.3901% ( 2) 00:15:36.016 2.619 - 2.631: 98.3977% ( 1) 00:15:36.016 2.631 - 2.643: 98.4054% ( 1) 00:15:36.016 2.643 - 2.655: 98.4284% ( 3) 00:15:36.016 2.667 - 2.679: 98.4361% ( 1) 00:15:36.016 2.679 - 2.690: 98.4437% ( 1) 00:15:36.016 2.726 - 2.738: 98.4514% ( 1) 00:15:36.016 2.738 - 2.750: 98.4591% ( 1) 00:15:36.016 2.750 - 2.761: 98.4667% ( 1) 00:15:36.016 2.785 - 2.797: 98.4744% ( 1) 00:15:36.016 2.809 - 2.821: 98.4821% ( 1) 00:15:36.016 2.856 - 2.868: 98.4897% ( 1) 00:15:36.016 2.868 - 2.880: 98.5051% ( 2) 00:15:36.016 2.892 - 2.904: 98.5127% ( 1) 00:15:36.016 2.951 - 2.963: 98.5281% ( 2) 00:15:36.016 2.987 - 2.999: 98.5357% ( 1) 00:15:36.016 2.999 - 3.010: 98.5434% ( 1) 00:15:36.016 3.390 - 3.413: 98.5511% ( 1) 00:15:36.016 3.413 - 3.437: 98.5587% ( 1) 00:15:36.016 3.437 - 3.461: 98.5741% ( 2) 00:15:36.016 3.461 - 3.484: 98.5817% ( 1) 00:15:36.016 3.508 - 3.532: 98.5894% ( 1) 00:15:36.016 3.532 - 3.556: 98.5971% ( 1) 00:15:36.016 3.556 - 3.579: 98.6047% ( 1) 00:15:36.016 3.579 - 3.603: 98.6201% ( 2) 00:15:36.016 3.603 - 3.627: 98.6277% ( 1) 00:15:36.016 3.627 - 3.650: 98.6354% ( 1) 00:15:36.016 3.650 - 3.674: 98.6507% ( 2) 00:15:36.016 3.674 - 3.698: 98.6584% ( 1) 00:15:36.016 3.745 - 3.769: 98.6661% ( 1) 00:15:36.016 3.769 - 3.793: 98.6814% ( 2) 00:15:36.016 3.793 - 3.816: 98.6967% ( 2) 00:15:36.016 3.840 - 3.864: 98.7121% ( 2) 00:15:36.016 3.887 - 3.911: 98.7197% ( 1) 00:15:36.016 3.911 - 3.935: 98.7274% ( 1) 00:15:36.016 3.935 
- 3.959: 9[2024-11-19 10:43:23.219011] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.016 8.7351% ( 1) 00:15:36.016 3.982 - 4.006: 98.7427% ( 1) 00:15:36.016 4.030 - 4.053: 98.7504% ( 1) 00:15:36.016 4.101 - 4.124: 98.7580% ( 1) 00:15:36.016 4.290 - 4.314: 98.7657% ( 1) 00:15:36.016 5.001 - 5.025: 98.7734% ( 1) 00:15:36.016 5.073 - 5.096: 98.7810% ( 1) 00:15:36.016 5.618 - 5.641: 98.7887% ( 1) 00:15:36.016 5.855 - 5.879: 98.7964% ( 1) 00:15:36.016 5.926 - 5.950: 98.8040% ( 1) 00:15:36.016 5.997 - 6.021: 98.8117% ( 1) 00:15:36.016 6.163 - 6.210: 98.8194% ( 1) 00:15:36.016 6.353 - 6.400: 98.8270% ( 1) 00:15:36.016 6.447 - 6.495: 98.8347% ( 1) 00:15:36.016 6.495 - 6.542: 98.8424% ( 1) 00:15:36.016 6.779 - 6.827: 98.8500% ( 1) 00:15:36.016 6.874 - 6.921: 98.8577% ( 1) 00:15:36.016 7.775 - 7.822: 98.8654% ( 1) 00:15:36.016 8.486 - 8.533: 98.8730% ( 1) 00:15:36.016 15.550 - 15.644: 98.8807% ( 1) 00:15:36.016 15.644 - 15.739: 98.8884% ( 1) 00:15:36.016 15.739 - 15.834: 98.9114% ( 3) 00:15:36.016 15.834 - 15.929: 98.9497% ( 5) 00:15:36.017 15.929 - 16.024: 98.9804% ( 4) 00:15:36.017 16.024 - 16.119: 99.0110% ( 4) 00:15:36.017 16.119 - 16.213: 99.0647% ( 7) 00:15:36.017 16.213 - 16.308: 99.0954% ( 4) 00:15:36.017 16.308 - 16.403: 99.1414% ( 6) 00:15:36.017 16.403 - 16.498: 99.1490% ( 1) 00:15:36.017 16.498 - 16.593: 99.1567% ( 1) 00:15:36.017 16.593 - 16.687: 99.1720% ( 2) 00:15:36.017 16.687 - 16.782: 99.2027% ( 4) 00:15:36.017 16.782 - 16.877: 99.2410% ( 5) 00:15:36.017 16.877 - 16.972: 99.2947% ( 7) 00:15:36.017 16.972 - 17.067: 99.3024% ( 1) 00:15:36.017 17.067 - 17.161: 99.3100% ( 1) 00:15:36.017 17.256 - 17.351: 99.3254% ( 2) 00:15:36.017 17.351 - 17.446: 99.3330% ( 1) 00:15:36.017 17.446 - 17.541: 99.3560% ( 3) 00:15:36.017 17.541 - 17.636: 99.3637% ( 1) 00:15:36.017 17.730 - 17.825: 99.3714% ( 1) 00:15:36.017 17.825 - 17.920: 99.3944% ( 3) 00:15:36.017 18.110 - 18.204: 99.4097% ( 2) 00:15:36.017 18.299 
- 18.394: 99.4174% ( 1) 00:15:36.017 25.790 - 25.979: 99.4250% ( 1) 00:15:36.017 31.289 - 31.479: 99.4327% ( 1) 00:15:36.017 3980.705 - 4004.978: 99.7393% ( 40) 00:15:36.017 4004.978 - 4029.250: 100.0000% ( 34) 00:15:36.017 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.017 [ 00:15:36.017 { 00:15:36.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.017 "subtype": "Discovery", 00:15:36.017 "listen_addresses": [], 00:15:36.017 "allow_any_host": true, 00:15:36.017 "hosts": [] 00:15:36.017 }, 00:15:36.017 { 00:15:36.017 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.017 "subtype": "NVMe", 00:15:36.017 "listen_addresses": [ 00:15:36.017 { 00:15:36.017 "trtype": "VFIOUSER", 00:15:36.017 "adrfam": "IPv4", 00:15:36.017 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.017 "trsvcid": "0" 00:15:36.017 } 00:15:36.017 ], 00:15:36.017 "allow_any_host": true, 00:15:36.017 "hosts": [], 00:15:36.017 "serial_number": "SPDK1", 00:15:36.017 "model_number": "SPDK bdev Controller", 00:15:36.017 "max_namespaces": 32, 00:15:36.017 "min_cntlid": 1, 00:15:36.017 "max_cntlid": 65519, 00:15:36.017 "namespaces": [ 00:15:36.017 { 00:15:36.017 "nsid": 1, 00:15:36.017 "bdev_name": "Malloc1", 00:15:36.017 "name": "Malloc1", 00:15:36.017 "nguid": 
"D1FF4F8E84F74B039744EACC006CC5C2", 00:15:36.017 "uuid": "d1ff4f8e-84f7-4b03-9744-eacc006cc5c2" 00:15:36.017 }, 00:15:36.017 { 00:15:36.017 "nsid": 2, 00:15:36.017 "bdev_name": "Malloc3", 00:15:36.017 "name": "Malloc3", 00:15:36.017 "nguid": "96C15DF39ACB46D19A23E4AB743EF44E", 00:15:36.017 "uuid": "96c15df3-9acb-46d1-9a23-e4ab743ef44e" 00:15:36.017 } 00:15:36.017 ] 00:15:36.017 }, 00:15:36.017 { 00:15:36.017 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.017 "subtype": "NVMe", 00:15:36.017 "listen_addresses": [ 00:15:36.017 { 00:15:36.017 "trtype": "VFIOUSER", 00:15:36.017 "adrfam": "IPv4", 00:15:36.017 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.017 "trsvcid": "0" 00:15:36.017 } 00:15:36.017 ], 00:15:36.017 "allow_any_host": true, 00:15:36.017 "hosts": [], 00:15:36.017 "serial_number": "SPDK2", 00:15:36.017 "model_number": "SPDK bdev Controller", 00:15:36.017 "max_namespaces": 32, 00:15:36.017 "min_cntlid": 1, 00:15:36.017 "max_cntlid": 65519, 00:15:36.017 "namespaces": [ 00:15:36.017 { 00:15:36.017 "nsid": 1, 00:15:36.017 "bdev_name": "Malloc2", 00:15:36.017 "name": "Malloc2", 00:15:36.017 "nguid": "B030DCFCA99E423A9EBAD2FD904608CE", 00:15:36.017 "uuid": "b030dcfc-a99e-423a-9eba-d2fd904608ce" 00:15:36.017 } 00:15:36.017 ] 00:15:36.017 } 00:15:36.017 ] 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1323567 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:36.017 10:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:36.017 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:36.275 [2024-11-19 10:43:23.767815] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.275 Malloc4 00:15:36.532 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:36.790 [2024-11-19 10:43:24.175972] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.790 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.790 Asynchronous Event Request test 00:15:36.790 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.790 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.790 Registering asynchronous event callbacks... 00:15:36.790 Starting namespace attribute notice tests for all controllers... 
00:15:36.790 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:36.790 aer_cb - Changed Namespace 00:15:36.790 Cleaning up... 00:15:37.048 [ 00:15:37.048 { 00:15:37.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.048 "subtype": "Discovery", 00:15:37.048 "listen_addresses": [], 00:15:37.048 "allow_any_host": true, 00:15:37.048 "hosts": [] 00:15:37.048 }, 00:15:37.048 { 00:15:37.048 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.048 "subtype": "NVMe", 00:15:37.048 "listen_addresses": [ 00:15:37.048 { 00:15:37.048 "trtype": "VFIOUSER", 00:15:37.048 "adrfam": "IPv4", 00:15:37.048 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.048 "trsvcid": "0" 00:15:37.048 } 00:15:37.048 ], 00:15:37.048 "allow_any_host": true, 00:15:37.048 "hosts": [], 00:15:37.048 "serial_number": "SPDK1", 00:15:37.048 "model_number": "SPDK bdev Controller", 00:15:37.048 "max_namespaces": 32, 00:15:37.048 "min_cntlid": 1, 00:15:37.048 "max_cntlid": 65519, 00:15:37.048 "namespaces": [ 00:15:37.048 { 00:15:37.048 "nsid": 1, 00:15:37.048 "bdev_name": "Malloc1", 00:15:37.048 "name": "Malloc1", 00:15:37.048 "nguid": "D1FF4F8E84F74B039744EACC006CC5C2", 00:15:37.048 "uuid": "d1ff4f8e-84f7-4b03-9744-eacc006cc5c2" 00:15:37.048 }, 00:15:37.048 { 00:15:37.048 "nsid": 2, 00:15:37.048 "bdev_name": "Malloc3", 00:15:37.048 "name": "Malloc3", 00:15:37.048 "nguid": "96C15DF39ACB46D19A23E4AB743EF44E", 00:15:37.048 "uuid": "96c15df3-9acb-46d1-9a23-e4ab743ef44e" 00:15:37.048 } 00:15:37.048 ] 00:15:37.048 }, 00:15:37.048 { 00:15:37.048 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.048 "subtype": "NVMe", 00:15:37.048 "listen_addresses": [ 00:15:37.048 { 00:15:37.048 "trtype": "VFIOUSER", 00:15:37.048 "adrfam": "IPv4", 00:15:37.048 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.048 "trsvcid": "0" 00:15:37.048 } 00:15:37.048 ], 00:15:37.048 "allow_any_host": true, 00:15:37.048 "hosts": [], 00:15:37.048 "serial_number": 
"SPDK2", 00:15:37.048 "model_number": "SPDK bdev Controller", 00:15:37.048 "max_namespaces": 32, 00:15:37.048 "min_cntlid": 1, 00:15:37.048 "max_cntlid": 65519, 00:15:37.048 "namespaces": [ 00:15:37.048 { 00:15:37.048 "nsid": 1, 00:15:37.048 "bdev_name": "Malloc2", 00:15:37.048 "name": "Malloc2", 00:15:37.048 "nguid": "B030DCFCA99E423A9EBAD2FD904608CE", 00:15:37.048 "uuid": "b030dcfc-a99e-423a-9eba-d2fd904608ce" 00:15:37.048 }, 00:15:37.048 { 00:15:37.048 "nsid": 2, 00:15:37.048 "bdev_name": "Malloc4", 00:15:37.048 "name": "Malloc4", 00:15:37.048 "nguid": "F892C4F3DA074FC7971F16772045370F", 00:15:37.048 "uuid": "f892c4f3-da07-4fc7-971f-16772045370f" 00:15:37.048 } 00:15:37.048 ] 00:15:37.048 } 00:15:37.048 ] 00:15:37.048 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1323567 00:15:37.048 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:37.048 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1317959 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1317959 ']' 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1317959 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317959 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1317959' 00:15:37.049 killing process with pid 1317959 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1317959 00:15:37.049 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1317959 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1323709 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1323709' 00:15:37.308 Process pid: 1323709 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1323709 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1323709 ']' 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.308 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 [2024-11-19 10:43:24.896006] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:37.308 [2024-11-19 10:43:24.897058] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:37.308 [2024-11-19 10:43:24.897132] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.567 [2024-11-19 10:43:24.963916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.567 [2024-11-19 10:43:25.017350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.567 [2024-11-19 10:43:25.017404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.567 [2024-11-19 10:43:25.017432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.567 [2024-11-19 10:43:25.017443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:37.567 [2024-11-19 10:43:25.017453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.567 [2024-11-19 10:43:25.018847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.567 [2024-11-19 10:43:25.018911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.567 [2024-11-19 10:43:25.018975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.567 [2024-11-19 10:43:25.018978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.567 [2024-11-19 10:43:25.109347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:37.567 [2024-11-19 10:43:25.109616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:37.568 [2024-11-19 10:43:25.109860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:37.568 [2024-11-19 10:43:25.110530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:37.568 [2024-11-19 10:43:25.110770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
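The setup_nvmf_vfio_user sequence traced in this log (create the VFIOUSER transport, then per device: socket directory, malloc bdev, subsystem, namespace, listener) can be condensed into the following hedged sketch. `RPC` defaults to `echo` here so the sequence can be dry-run without a live SPDK target; pointing `RPC` at `scripts/rpc.py` and `VFIO_ROOT` at `/var/run/vfio-user` would issue the actual calls shown in the log. The dry-run default and the temp-dir root are assumptions for illustration, not part of the original test script.

```shell
#!/usr/bin/env bash
# Hypothetical condensed sketch of the setup_nvmf_vfio_user flow in this log.
# RPC=echo makes this a dry run; the real test uses scripts/rpc.py against a
# running nvmf_tgt and roots sockets at /var/run/vfio-user.
RPC=${RPC:-echo}
VFIO_ROOT=${VFIO_ROOT:-$(mktemp -d)}

$RPC nvmf_create_transport -t VFIOUSER            # register the vfio-user transport
for i in 1 2; do
  mkdir -p "$VFIO_ROOT/domain/vfio-user$i/$i"     # per-controller socket directory
  $RPC bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
  $RPC nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
  $RPC nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
      -t VFIOUSER -a "$VFIO_ROOT/domain/vfio-user$i/$i" -s 0
done
```

With the real `rpc.py`, this reproduces the controller sockets that the log's later AER and compliance stages attach to (e.g. `/var/run/vfio-user/domain/vfio-user2/2`).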
00:15:37.568 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.568 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:37.568 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.943 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:39.202 Malloc1 00:15:39.202 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:39.461 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:39.719 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:39.978 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.978 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:39.978 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:40.543 Malloc2 00:15:40.543 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:40.543 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:40.801 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1323709 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1323709 ']' 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1323709 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.365 10:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323709 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323709' 00:15:41.365 killing process with pid 1323709 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1323709 00:15:41.365 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1323709 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:41.624 00:15:41.624 real 0m53.575s 00:15:41.624 user 3m27.234s 00:15:41.624 sys 0m3.945s 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 ************************************ 00:15:41.624 END TEST nvmf_vfio_user 00:15:41.624 ************************************ 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.624 ************************************ 00:15:41.624 START TEST nvmf_vfio_user_nvme_compliance 00:15:41.624 ************************************ 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.624 * Looking for test storage... 00:15:41.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.624 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.624 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.624 --rc genhtml_branch_coverage=1 00:15:41.624 --rc genhtml_function_coverage=1 00:15:41.624 --rc genhtml_legend=1 00:15:41.624 --rc geninfo_all_blocks=1 00:15:41.624 --rc geninfo_unexecuted_blocks=1 00:15:41.624 00:15:41.624 ' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.624 --rc genhtml_branch_coverage=1 00:15:41.624 --rc genhtml_function_coverage=1 00:15:41.624 --rc genhtml_legend=1 00:15:41.624 --rc geninfo_all_blocks=1 00:15:41.624 --rc geninfo_unexecuted_blocks=1 00:15:41.624 00:15:41.624 ' 00:15:41.624 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.624 --rc genhtml_branch_coverage=1 00:15:41.624 --rc genhtml_function_coverage=1 00:15:41.625 --rc 
genhtml_legend=1 00:15:41.625 --rc geninfo_all_blocks=1 00:15:41.625 --rc geninfo_unexecuted_blocks=1 00:15:41.625 00:15:41.625 ' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.625 --rc genhtml_branch_coverage=1 00:15:41.625 --rc genhtml_function_coverage=1 00:15:41.625 --rc genhtml_legend=1 00:15:41.625 --rc geninfo_all_blocks=1 00:15:41.625 --rc geninfo_unexecuted_blocks=1 00:15:41.625 00:15:41.625 ' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.625 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.625 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1324316 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1324316' 00:15:41.625 Process pid: 1324316 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1324316 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1324316 ']' 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.625 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.884 [2024-11-19 10:43:29.278451] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:41.884 [2024-11-19 10:43:29.278540] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.884 [2024-11-19 10:43:29.344500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.884 [2024-11-19 10:43:29.405006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.884 [2024-11-19 10:43:29.405056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.884 [2024-11-19 10:43:29.405083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.884 [2024-11-19 10:43:29.405094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.884 [2024-11-19 10:43:29.405104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:41.884 [2024-11-19 10:43:29.406571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.884 [2024-11-19 10:43:29.406598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.884 [2024-11-19 10:43:29.406602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.141 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.141 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:42.141 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.073 10:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.073 malloc0 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.073 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.074 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:43.074 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:43.331 00:15:43.331 00:15:43.331 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.331 http://cunit.sourceforge.net/ 00:15:43.331 00:15:43.331 00:15:43.331 Suite: nvme_compliance 00:15:43.331 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 10:43:30.793843] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.331 [2024-11-19 10:43:30.795374] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:43.331 [2024-11-19 10:43:30.795400] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:43.331 [2024-11-19 10:43:30.795413] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:43.331 [2024-11-19 10:43:30.796865] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.331 passed 00:15:43.331 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 10:43:30.881497] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.331 [2024-11-19 10:43:30.884524] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.331 passed 00:15:43.589 Test: admin_identify_ns ...[2024-11-19 10:43:30.971126] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.589 [2024-11-19 10:43:31.028348] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:43.589 [2024-11-19 10:43:31.038319] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:43.589 [2024-11-19 10:43:31.059436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:43.589 passed 00:15:43.589 Test: admin_get_features_mandatory_features ...[2024-11-19 10:43:31.143127] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.589 [2024-11-19 10:43:31.146148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.589 passed 00:15:43.846 Test: admin_get_features_optional_features ...[2024-11-19 10:43:31.230692] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.846 [2024-11-19 10:43:31.233714] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.846 passed 00:15:43.846 Test: admin_set_features_number_of_queues ...[2024-11-19 10:43:31.314803] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.846 [2024-11-19 10:43:31.419420] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.846 passed 00:15:44.135 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 10:43:31.503472] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.135 [2024-11-19 10:43:31.506499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.135 passed 00:15:44.135 Test: admin_get_log_page_with_lpo ...[2024-11-19 10:43:31.589619] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.135 [2024-11-19 10:43:31.657321] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:44.135 [2024-11-19 10:43:31.670411] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.135 passed 00:15:44.436 Test: fabric_property_get ...[2024-11-19 10:43:31.754522] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.436 [2024-11-19 10:43:31.755818] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:44.436 [2024-11-19 10:43:31.757540] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.436 passed 00:15:44.436 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 10:43:31.842083] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.436 [2024-11-19 10:43:31.843412] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:44.436 [2024-11-19 10:43:31.847113] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.436 passed 00:15:44.436 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 10:43:31.929851] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.436 [2024-11-19 10:43:32.014310] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.436 [2024-11-19 10:43:32.030328] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.436 [2024-11-19 10:43:32.035452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.720 passed 00:15:44.720 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 10:43:32.119553] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.720 [2024-11-19 10:43:32.120899] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:44.720 [2024-11-19 10:43:32.122575] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.720 passed 00:15:44.720 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 10:43:32.206788] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.720 [2024-11-19 10:43:32.283326] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.720 [2024-11-19 
10:43:32.307309] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.720 [2024-11-19 10:43:32.309436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.977 passed 00:15:44.977 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 10:43:32.393688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.977 [2024-11-19 10:43:32.395012] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:44.977 [2024-11-19 10:43:32.395067] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:44.977 [2024-11-19 10:43:32.396715] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.977 passed 00:15:44.977 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 10:43:32.480930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.977 [2024-11-19 10:43:32.573319] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:44.977 [2024-11-19 10:43:32.581326] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:44.977 [2024-11-19 10:43:32.589317] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:44.977 [2024-11-19 10:43:32.597321] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:45.235 [2024-11-19 10:43:32.626426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.235 passed 00:15:45.235 Test: admin_create_io_sq_verify_pc ...[2024-11-19 10:43:32.708914] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.235 [2024-11-19 10:43:32.725327] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:45.235 [2024-11-19 10:43:32.743297] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.235 passed 00:15:45.235 Test: admin_create_io_qp_max_qps ...[2024-11-19 10:43:32.823804] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.607 [2024-11-19 10:43:33.925321] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:46.864 [2024-11-19 10:43:34.304662] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.864 passed 00:15:46.864 Test: admin_create_io_sq_shared_cq ...[2024-11-19 10:43:34.386933] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.121 [2024-11-19 10:43:34.517313] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:47.121 [2024-11-19 10:43:34.554415] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.121 passed 00:15:47.121 00:15:47.121 Run Summary: Type Total Ran Passed Failed Inactive 00:15:47.121 suites 1 1 n/a 0 0 00:15:47.121 tests 18 18 18 0 0 00:15:47.121 asserts 360 360 360 0 n/a 00:15:47.121 00:15:47.121 Elapsed time = 1.558 seconds 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1324316 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1324316 ']' 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1324316 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324316 00:15:47.121 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.122 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.122 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324316' 00:15:47.122 killing process with pid 1324316 00:15:47.122 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1324316 00:15:47.122 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1324316 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:47.380 00:15:47.380 real 0m5.848s 00:15:47.380 user 0m16.370s 00:15:47.380 sys 0m0.579s 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 ************************************ 00:15:47.380 END TEST nvmf_vfio_user_nvme_compliance 00:15:47.380 ************************************ 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 ************************************ 00:15:47.380 START TEST nvmf_vfio_user_fuzz 00:15:47.380 ************************************ 00:15:47.380 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:47.638 * Looking for test storage... 00:15:47.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.638 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:47.638 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.639 10:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.639 --rc genhtml_branch_coverage=1 00:15:47.639 --rc genhtml_function_coverage=1 00:15:47.639 --rc genhtml_legend=1 00:15:47.639 --rc geninfo_all_blocks=1 00:15:47.639 --rc geninfo_unexecuted_blocks=1 00:15:47.639 00:15:47.639 ' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.639 --rc genhtml_branch_coverage=1 00:15:47.639 --rc genhtml_function_coverage=1 00:15:47.639 --rc genhtml_legend=1 00:15:47.639 --rc geninfo_all_blocks=1 00:15:47.639 --rc geninfo_unexecuted_blocks=1 00:15:47.639 00:15:47.639 ' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.639 --rc genhtml_branch_coverage=1 00:15:47.639 --rc genhtml_function_coverage=1 00:15:47.639 --rc genhtml_legend=1 00:15:47.639 --rc geninfo_all_blocks=1 00:15:47.639 --rc geninfo_unexecuted_blocks=1 00:15:47.639 00:15:47.639 ' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:47.639 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:47.639 --rc genhtml_branch_coverage=1 00:15:47.639 --rc genhtml_function_coverage=1 00:15:47.639 --rc genhtml_legend=1 00:15:47.639 --rc geninfo_all_blocks=1 00:15:47.639 --rc geninfo_unexecuted_blocks=1 00:15:47.639 00:15:47.639 ' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.639 10:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.639 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1325051 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1325051' 00:15:47.640 Process pid: 1325051 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1325051 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1325051 ']' 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.640 10:43:35 
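The `[: : integer expression expected` message in the trace above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: POSIX `test` requires both operands of `-eq` to be integers, and the variable there expanded to an empty string. A minimal sketch of the failure mode and the usual guard (`SOME_FLAG` is a hypothetical stand-in — the real variable name is already expanded away in the trace):

```shell
# Reproduces the class of error logged above: [ "" -eq 1 ] is malformed
# because test sees an empty string where an integer is required.
# Defaulting the expansion with ${var:-0} keeps the comparison well-formed.
SOME_FLAG=""                            # hypothetical stand-in variable
if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # empty/unset falls back to 0
    echo "flag set"
else
    echo "flag unset"                   # prints "flag unset"
fi
```

The unguarded form `[ "$SOME_FLAG" -eq 1 ]` is what produces the "integer expression expected" diagnostic; the script continues because the failed test simply takes the else branch.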
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.640 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.898 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.898 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:47.898 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:48.830 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:48.830 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.830 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.089 malloc0 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:49.089 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:21.151 Fuzzing completed. Shutting down the fuzz application 00:16:21.151 00:16:21.151 Dumping successful admin opcodes: 00:16:21.151 8, 9, 10, 24, 00:16:21.151 Dumping successful io opcodes: 00:16:21.151 0, 00:16:21.151 NS: 0x20000081ef00 I/O qp, Total commands completed: 716649, total successful commands: 2790, random_seed: 616510080 00:16:21.151 NS: 0x20000081ef00 admin qp, Total commands completed: 91480, total successful commands: 739, random_seed: 45879872 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1325051 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1325051 ']' 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1325051 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325051 00:16:21.151 10:44:06 
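The trace above builds the vfio-user fuzz target one RPC at a time (transport, malloc bdev, subsystem, namespace, listener) before launching nvme_fuzz. Condensed as a dry-run sketch — `rpc` here only echoes, whereas the real run routes each call through SPDK's RPC client via `rpc_cmd`:

```shell
# Dry-run of the RPC sequence traced above; rpc() prints instead of
# talking to a live nvmf_tgt so the ordering is inspectable anywhere.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user

setup_vfio_user_target() {
    rpc nvmf_create_transport -t VFIOUSER
    rpc bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem "$NQN" -a -s spdk
    rpc nvmf_subsystem_add_ns "$NQN" malloc0
    rpc nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0
}
setup_vfio_user_target
```

The listener address is a directory rather than an IP because vfio-user transports attach over a UNIX-socket/shared-memory path; the fuzzer is then pointed at the same trid string (`trtype:VFIOUSER subnqn:… traddr:/var/run/vfio-user`) that the test exported.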
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325051' 00:16:21.151 killing process with pid 1325051 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1325051 00:16:21.151 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1325051 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:21.151 00:16:21.151 real 0m32.254s 00:16:21.151 user 0m34.344s 00:16:21.151 sys 0m26.971s 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.151 ************************************ 00:16:21.151 END TEST nvmf_vfio_user_fuzz 00:16:21.151 ************************************ 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.151 ************************************ 00:16:21.151 START TEST nvmf_auth_target 00:16:21.151 ************************************ 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:21.151 * Looking for test storage... 00:16:21.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.151 10:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.151 10:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.151 --rc genhtml_branch_coverage=1 00:16:21.151 --rc genhtml_function_coverage=1 00:16:21.151 --rc genhtml_legend=1 00:16:21.151 --rc geninfo_all_blocks=1 00:16:21.151 --rc geninfo_unexecuted_blocks=1 00:16:21.151 00:16:21.151 ' 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.151 --rc genhtml_branch_coverage=1 00:16:21.151 --rc genhtml_function_coverage=1 00:16:21.151 --rc genhtml_legend=1 00:16:21.151 --rc geninfo_all_blocks=1 00:16:21.151 --rc geninfo_unexecuted_blocks=1 00:16:21.151 00:16:21.151 ' 00:16:21.151 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.151 --rc genhtml_branch_coverage=1 00:16:21.152 --rc genhtml_function_coverage=1 00:16:21.152 --rc genhtml_legend=1 00:16:21.152 --rc geninfo_all_blocks=1 00:16:21.152 --rc geninfo_unexecuted_blocks=1 00:16:21.152 00:16:21.152 ' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.152 --rc genhtml_branch_coverage=1 00:16:21.152 --rc genhtml_function_coverage=1 00:16:21.152 --rc genhtml_legend=1 00:16:21.152 
--rc geninfo_all_blocks=1 00:16:21.152 --rc geninfo_unexecuted_blocks=1 00:16:21.152 00:16:21.152 ' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.152 
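The `lt 1.15 2` / `cmp_versions` trace a few entries back splits each version string on dots and compares the fields numerically to decide whether the installed lcov predates 2.x. A minimal re-derivation of that check (this is a simplified sketch of the comparison, not the exact scripts/common.sh implementation, which also splits on `-` and `:`):

```shell
# Field-by-field numeric version compare: returns 0 (true) when $1 < $2.
# Missing fields default to 0, so "2" compares like "2.0".
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # versions are equal
}
version_lt 1.15 2 && echo "older"    # prints "older": lcov 1.15 < 2
```

A plain string comparison would get this wrong (`"1.15" < "2"` lexically, but also `"1.9" > "1.15"`), which is why the script bothers with per-field decimal checks.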
10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:21.152 10:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:21.152 10:44:07 
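auth.sh defines three digest algorithms and six DH groups above; the body of the test (beyond this excerpt) presumably exercises combinations of them. A dry-run sketch of such a matrix walk, under that assumption — the counting loop is illustrative, not the test's actual driver:

```shell
# Iterate the digest x dhgroup matrix defined by auth.sh; the real test
# would configure host keys and attempt an authenticated connect per pair.
digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
combos=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        combos=$((combos + 1))    # placeholder for per-pair setup + connect
    done
done
echo "$combos combinations"       # prints "18 combinations"
```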
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:21.152 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.088 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.088 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:22.088 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.088 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:22.088 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.089 
10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:22.089 Found net devices under 0000:09:00.0: cvl_0_0 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.089 
10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:22.089 Found net devices under 0000:09:00.1: cvl_0_1 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.089 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.089 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:16:22.348 00:16:22.348 --- 10.0.0.2 ping statistics --- 00:16:22.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.348 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:16:22.348 00:16:22.348 --- 10.0.0.1 ping statistics --- 00:16:22.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.348 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1330499 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1330499 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1330499 ']' 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.348 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1330528 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=279b70a40ce3a1f71ee43d9653d3d24e302fdb5074395d6f 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vIK 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 279b70a40ce3a1f71ee43d9653d3d24e302fdb5074395d6f 0 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 279b70a40ce3a1f71ee43d9653d3d24e302fdb5074395d6f 0 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=279b70a40ce3a1f71ee43d9653d3d24e302fdb5074395d6f 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vIK 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vIK 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vIK 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e314d2aabe21f5dd1a35e98c1be86aa242469a7ba1e735e7ff6e5311094061a 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uwl 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e314d2aabe21f5dd1a35e98c1be86aa242469a7ba1e735e7ff6e5311094061a 3 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e314d2aabe21f5dd1a35e98c1be86aa242469a7ba1e735e7ff6e5311094061a 3 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e314d2aabe21f5dd1a35e98c1be86aa242469a7ba1e735e7ff6e5311094061a 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:22.606 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.867 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uwl 00:16:22.867 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uwl 00:16:22.867 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.uwl 00:16:22.867 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:22.867 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=56175b6398ad96924d3810d529eea250 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JKf 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 56175b6398ad96924d3810d529eea250 1 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
56175b6398ad96924d3810d529eea250 1 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=56175b6398ad96924d3810d529eea250 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JKf 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JKf 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JKf 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.868 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74565d9e4d5469735880abe7b91c7dc1add117800c71e5db 00:16:22.869 10:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pg1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74565d9e4d5469735880abe7b91c7dc1add117800c71e5db 2 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74565d9e4d5469735880abe7b91c7dc1add117800c71e5db 2 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74565d9e4d5469735880abe7b91c7dc1add117800c71e5db 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pg1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pg1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Pg1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=078fa6aae1666e289ca629a5bae6f0bee2122f50bba62f7e 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Cg 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 078fa6aae1666e289ca629a5bae6f0bee2122f50bba62f7e 2 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 078fa6aae1666e289ca629a5bae6f0bee2122f50bba62f7e 2 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=078fa6aae1666e289ca629a5bae6f0bee2122f50bba62f7e 00:16:22.869 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Cg 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Cg 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.4Cg 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e5b1979abc8b06512b7d8a53b091d7b 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TPd 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e5b1979abc8b06512b7d8a53b091d7b 1 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e5b1979abc8b06512b7d8a53b091d7b 1 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e5b1979abc8b06512b7d8a53b091d7b 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TPd 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TPd 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.TPd 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e2c658402def2ed17311f03ea5d75cf3a36e8d118834d3fbeb2d9e5e0c2c2a3 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fzN 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2e2c658402def2ed17311f03ea5d75cf3a36e8d118834d3fbeb2d9e5e0c2c2a3 3 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2e2c658402def2ed17311f03ea5d75cf3a36e8d118834d3fbeb2d9e5e0c2c2a3 3 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:22.870 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:22.871 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e2c658402def2ed17311f03ea5d75cf3a36e8d118834d3fbeb2d9e5e0c2c2a3 00:16:22.871 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:22.871 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fzN 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fzN 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.fzN 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1330499 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1330499 ']' 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.128 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1330528 /var/tmp/host.sock 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1330528 ']' 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:23.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.385 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vIK 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.642 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.643 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vIK 00:16:23.643 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vIK 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.uwl ]] 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uwl 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uwl 00:16:23.900 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uwl 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JKf 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JKf 00:16:24.158 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JKf 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Pg1 ]] 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pg1 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pg1 00:16:24.415 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pg1 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cg 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cg 00:16:24.672 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cg 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.TPd ]] 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPd 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPd 00:16:24.930 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPd 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fzN 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fzN 00:16:25.187 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fzN 00:16:25.444 10:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:25.444 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:25.444 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.444 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.444 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.444 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.702 10:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.702 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.266 00:16:26.266 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.266 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.266 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.524 { 00:16:26.524 "cntlid": 1, 00:16:26.524 "qid": 0, 00:16:26.524 "state": "enabled", 00:16:26.524 "thread": "nvmf_tgt_poll_group_000", 00:16:26.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.524 "listen_address": { 00:16:26.524 "trtype": "TCP", 00:16:26.524 "adrfam": "IPv4", 00:16:26.524 "traddr": "10.0.0.2", 00:16:26.524 "trsvcid": "4420" 00:16:26.524 }, 00:16:26.524 "peer_address": { 00:16:26.524 "trtype": "TCP", 00:16:26.524 "adrfam": "IPv4", 00:16:26.524 "traddr": "10.0.0.1", 00:16:26.524 "trsvcid": "52848" 00:16:26.524 }, 00:16:26.524 "auth": { 00:16:26.524 "state": "completed", 00:16:26.524 "digest": "sha256", 00:16:26.524 "dhgroup": "null" 00:16:26.524 } 00:16:26.524 } 00:16:26.524 ]' 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.524 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.524 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.524 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.524 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.524 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.524 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.780 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:26.780 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:27.710 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.710 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:27.711 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.967 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.968 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.968 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.968 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.225 00:16:28.225 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.226 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.226 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.483 { 00:16:28.483 "cntlid": 3, 00:16:28.483 "qid": 0, 00:16:28.483 "state": "enabled", 00:16:28.483 "thread": "nvmf_tgt_poll_group_000", 00:16:28.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:28.483 "listen_address": { 00:16:28.483 "trtype": "TCP", 00:16:28.483 "adrfam": "IPv4", 00:16:28.483 
"traddr": "10.0.0.2", 00:16:28.483 "trsvcid": "4420" 00:16:28.483 }, 00:16:28.483 "peer_address": { 00:16:28.483 "trtype": "TCP", 00:16:28.483 "adrfam": "IPv4", 00:16:28.483 "traddr": "10.0.0.1", 00:16:28.483 "trsvcid": "52876" 00:16:28.483 }, 00:16:28.483 "auth": { 00:16:28.483 "state": "completed", 00:16:28.483 "digest": "sha256", 00:16:28.483 "dhgroup": "null" 00:16:28.483 } 00:16:28.483 } 00:16:28.483 ]' 00:16:28.483 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.742 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.000 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:29.000 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.931 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.188 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.443 00:16:30.443 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.443 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.443 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.008 { 00:16:31.008 "cntlid": 5, 00:16:31.008 "qid": 0, 00:16:31.008 "state": "enabled", 00:16:31.008 "thread": "nvmf_tgt_poll_group_000", 00:16:31.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.008 "listen_address": { 00:16:31.008 "trtype": "TCP", 00:16:31.008 "adrfam": "IPv4", 00:16:31.008 "traddr": "10.0.0.2", 00:16:31.008 "trsvcid": "4420" 00:16:31.008 }, 00:16:31.008 "peer_address": { 00:16:31.008 "trtype": "TCP", 00:16:31.008 "adrfam": "IPv4", 00:16:31.008 "traddr": "10.0.0.1", 00:16:31.008 "trsvcid": "58386" 00:16:31.008 }, 00:16:31.008 "auth": { 00:16:31.008 "state": "completed", 00:16:31.008 "digest": "sha256", 00:16:31.008 "dhgroup": "null" 00:16:31.008 } 00:16:31.008 } 00:16:31.008 ]' 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.008 10:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.008 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.265 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:31.265 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.197 
10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.197 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.455 10:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.455 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.713 00:16:32.713 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.713 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.713 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.971 10:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.971 { 00:16:32.971 "cntlid": 7, 00:16:32.971 "qid": 0, 00:16:32.971 "state": "enabled", 00:16:32.971 "thread": "nvmf_tgt_poll_group_000", 00:16:32.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:32.971 "listen_address": { 00:16:32.971 "trtype": "TCP", 00:16:32.971 "adrfam": "IPv4", 00:16:32.971 "traddr": "10.0.0.2", 00:16:32.971 "trsvcid": "4420" 00:16:32.971 }, 00:16:32.971 "peer_address": { 00:16:32.971 "trtype": "TCP", 00:16:32.971 "adrfam": "IPv4", 00:16:32.971 "traddr": "10.0.0.1", 00:16:32.971 "trsvcid": "58422" 00:16:32.971 }, 00:16:32.971 "auth": { 00:16:32.971 "state": "completed", 00:16:32.971 "digest": "sha256", 00:16:32.971 "dhgroup": "null" 00:16:32.971 } 00:16:32.971 } 00:16:32.971 ]' 00:16:32.971 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.228 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:33.486 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:33.486 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.417 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.674 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.674 10:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.932 00:16:34.932 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.932 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.932 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.189 { 00:16:35.189 "cntlid": 9, 00:16:35.189 "qid": 0, 00:16:35.189 "state": "enabled", 00:16:35.189 "thread": "nvmf_tgt_poll_group_000", 00:16:35.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:35.189 "listen_address": { 00:16:35.189 "trtype": "TCP", 00:16:35.189 "adrfam": "IPv4", 00:16:35.189 "traddr": "10.0.0.2", 00:16:35.189 "trsvcid": "4420" 00:16:35.189 }, 00:16:35.189 "peer_address": { 
00:16:35.189 "trtype": "TCP", 00:16:35.189 "adrfam": "IPv4", 00:16:35.189 "traddr": "10.0.0.1", 00:16:35.189 "trsvcid": "58436" 00:16:35.189 }, 00:16:35.189 "auth": { 00:16:35.189 "state": "completed", 00:16:35.189 "digest": "sha256", 00:16:35.189 "dhgroup": "ffdhe2048" 00:16:35.189 } 00:16:35.189 } 00:16:35.189 ]' 00:16:35.189 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.447 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.705 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:35.705 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.639 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.897 10:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.897 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.462 00:16:37.462 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.462 10:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.462 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.720 { 00:16:37.720 "cntlid": 11, 00:16:37.720 "qid": 0, 00:16:37.720 "state": "enabled", 00:16:37.720 "thread": "nvmf_tgt_poll_group_000", 00:16:37.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:37.720 "listen_address": { 00:16:37.720 "trtype": "TCP", 00:16:37.720 "adrfam": "IPv4", 00:16:37.720 "traddr": "10.0.0.2", 00:16:37.720 "trsvcid": "4420" 00:16:37.720 }, 00:16:37.720 "peer_address": { 00:16:37.720 "trtype": "TCP", 00:16:37.720 "adrfam": "IPv4", 00:16:37.720 "traddr": "10.0.0.1", 00:16:37.720 "trsvcid": "58456" 00:16:37.720 }, 00:16:37.720 "auth": { 00:16:37.720 "state": "completed", 00:16:37.720 "digest": "sha256", 00:16:37.720 "dhgroup": "ffdhe2048" 00:16:37.720 } 00:16:37.720 } 00:16:37.720 ]' 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.720 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.978 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:37.978 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.910 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.168 10:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.168 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.425 00:16:39.682 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.682 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.682 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.946 { 00:16:39.946 "cntlid": 13, 00:16:39.946 "qid": 0, 00:16:39.946 "state": "enabled", 00:16:39.946 "thread": "nvmf_tgt_poll_group_000", 00:16:39.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.946 "listen_address": { 00:16:39.946 "trtype": "TCP", 00:16:39.946 "adrfam": "IPv4", 00:16:39.946 "traddr": "10.0.0.2", 00:16:39.946 "trsvcid": "4420" 00:16:39.946 }, 00:16:39.946 "peer_address": { 00:16:39.946 "trtype": "TCP", 00:16:39.946 "adrfam": "IPv4", 00:16:39.946 "traddr": "10.0.0.1", 00:16:39.946 "trsvcid": "55752" 00:16:39.946 }, 00:16:39.946 "auth": { 00:16:39.946 "state": "completed", 00:16:39.946 "digest": "sha256", 00:16:39.946 "dhgroup": "ffdhe2048" 00:16:39.946 } 00:16:39.946 } 00:16:39.946 ]' 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:39.946 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.260 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:40.260 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:41.220 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.221 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.479 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.737 00:16:41.737 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.737 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.737 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.995 { 00:16:41.995 "cntlid": 15, 00:16:41.995 "qid": 0, 00:16:41.995 "state": "enabled", 00:16:41.995 "thread": "nvmf_tgt_poll_group_000", 00:16:41.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:41.995 "listen_address": { 00:16:41.995 "trtype": "TCP", 00:16:41.995 "adrfam": "IPv4", 00:16:41.995 "traddr": "10.0.0.2", 00:16:41.995 "trsvcid": 
"4420" 00:16:41.995 }, 00:16:41.995 "peer_address": { 00:16:41.995 "trtype": "TCP", 00:16:41.995 "adrfam": "IPv4", 00:16:41.995 "traddr": "10.0.0.1", 00:16:41.995 "trsvcid": "55784" 00:16:41.995 }, 00:16:41.995 "auth": { 00:16:41.995 "state": "completed", 00:16:41.995 "digest": "sha256", 00:16:41.995 "dhgroup": "ffdhe2048" 00:16:41.995 } 00:16:41.995 } 00:16:41.995 ]' 00:16:41.995 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.253 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.511 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:42.511 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.443 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.700 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.957 00:16:43.957 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.957 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:43.957 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.214 { 00:16:44.214 "cntlid": 17, 00:16:44.214 "qid": 0, 00:16:44.214 "state": "enabled", 00:16:44.214 "thread": "nvmf_tgt_poll_group_000", 00:16:44.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:44.214 "listen_address": { 00:16:44.214 "trtype": "TCP", 00:16:44.214 "adrfam": "IPv4", 00:16:44.214 "traddr": "10.0.0.2", 00:16:44.214 "trsvcid": "4420" 00:16:44.214 }, 00:16:44.214 "peer_address": { 00:16:44.214 "trtype": "TCP", 00:16:44.214 "adrfam": "IPv4", 00:16:44.214 "traddr": "10.0.0.1", 00:16:44.214 "trsvcid": "55804" 00:16:44.214 }, 00:16:44.214 "auth": { 00:16:44.214 "state": "completed", 00:16:44.214 "digest": "sha256", 00:16:44.214 "dhgroup": "ffdhe3072" 00:16:44.214 } 00:16:44.214 } 00:16:44.214 ]' 00:16:44.214 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.472 10:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.472 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.729 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:44.729 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.661 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.918 10:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.918 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.176 00:16:46.176 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.176 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.176 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.432 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.432 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.432 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.432 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.689 { 00:16:46.689 "cntlid": 19, 00:16:46.689 "qid": 0, 00:16:46.689 "state": "enabled", 00:16:46.689 "thread": "nvmf_tgt_poll_group_000", 00:16:46.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:46.689 "listen_address": { 00:16:46.689 "trtype": "TCP", 00:16:46.689 "adrfam": "IPv4", 00:16:46.689 "traddr": "10.0.0.2", 00:16:46.689 "trsvcid": "4420" 00:16:46.689 }, 00:16:46.689 "peer_address": { 00:16:46.689 "trtype": "TCP", 00:16:46.689 "adrfam": "IPv4", 00:16:46.689 "traddr": "10.0.0.1", 00:16:46.689 "trsvcid": "55830" 00:16:46.689 }, 00:16:46.689 "auth": { 00:16:46.689 "state": "completed", 00:16:46.689 "digest": "sha256", 00:16:46.689 "dhgroup": "ffdhe3072" 00:16:46.689 } 00:16:46.689 } 00:16:46.689 ]' 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:46.689 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.946 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:46.946 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.137 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.394 00:16:48.394 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.394 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.394 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.652 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.652 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.652 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.652 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.910 { 00:16:48.910 "cntlid": 21, 00:16:48.910 "qid": 0, 00:16:48.910 "state": "enabled", 00:16:48.910 "thread": "nvmf_tgt_poll_group_000", 00:16:48.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.910 "listen_address": { 
00:16:48.910 "trtype": "TCP", 00:16:48.910 "adrfam": "IPv4", 00:16:48.910 "traddr": "10.0.0.2", 00:16:48.910 "trsvcid": "4420" 00:16:48.910 }, 00:16:48.910 "peer_address": { 00:16:48.910 "trtype": "TCP", 00:16:48.910 "adrfam": "IPv4", 00:16:48.910 "traddr": "10.0.0.1", 00:16:48.910 "trsvcid": "55874" 00:16:48.910 }, 00:16:48.910 "auth": { 00:16:48.910 "state": "completed", 00:16:48.910 "digest": "sha256", 00:16:48.910 "dhgroup": "ffdhe3072" 00:16:48.910 } 00:16:48.910 } 00:16:48.910 ]' 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.910 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.167 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:49.167 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.098 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.355 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.612 00:16:50.612 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.612 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:50.612 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.869 { 00:16:50.869 "cntlid": 23, 00:16:50.869 "qid": 0, 00:16:50.869 "state": "enabled", 00:16:50.869 "thread": "nvmf_tgt_poll_group_000", 00:16:50.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:50.869 "listen_address": { 00:16:50.869 "trtype": "TCP", 00:16:50.869 "adrfam": "IPv4", 00:16:50.869 "traddr": "10.0.0.2", 00:16:50.869 "trsvcid": "4420" 00:16:50.869 }, 00:16:50.869 "peer_address": { 00:16:50.869 "trtype": "TCP", 00:16:50.869 "adrfam": "IPv4", 00:16:50.869 "traddr": "10.0.0.1", 00:16:50.869 "trsvcid": "37128" 00:16:50.869 }, 00:16:50.869 "auth": { 00:16:50.869 "state": "completed", 00:16:50.869 "digest": "sha256", 00:16:50.869 "dhgroup": "ffdhe3072" 00:16:50.869 } 00:16:50.869 } 00:16:50.869 ]' 00:16:50.869 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.126 10:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.126 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.383 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:51.383 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.351 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.608 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.169 00:16:53.169 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.169 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.169 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.426 10:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.426 { 00:16:53.426 "cntlid": 25, 00:16:53.426 "qid": 0, 00:16:53.426 "state": "enabled", 00:16:53.426 "thread": "nvmf_tgt_poll_group_000", 00:16:53.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.426 "listen_address": { 00:16:53.426 "trtype": "TCP", 00:16:53.426 "adrfam": "IPv4", 00:16:53.426 "traddr": "10.0.0.2", 00:16:53.426 "trsvcid": "4420" 00:16:53.426 }, 00:16:53.426 "peer_address": { 00:16:53.426 "trtype": "TCP", 00:16:53.426 "adrfam": "IPv4", 00:16:53.426 "traddr": "10.0.0.1", 00:16:53.426 "trsvcid": "37152" 00:16:53.426 }, 00:16:53.426 "auth": { 00:16:53.426 "state": "completed", 00:16:53.426 "digest": "sha256", 00:16:53.426 "dhgroup": "ffdhe4096" 00:16:53.426 } 00:16:53.426 } 00:16:53.426 ]' 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.426 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.426 10:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.684 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:53.684 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.617 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.874 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.875 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.875 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.440 00:16:55.440 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.440 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.440 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.698 { 00:16:55.698 "cntlid": 27, 00:16:55.698 "qid": 0, 00:16:55.698 "state": "enabled", 00:16:55.698 "thread": "nvmf_tgt_poll_group_000", 00:16:55.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:55.698 
"listen_address": { 00:16:55.698 "trtype": "TCP", 00:16:55.698 "adrfam": "IPv4", 00:16:55.698 "traddr": "10.0.0.2", 00:16:55.698 "trsvcid": "4420" 00:16:55.698 }, 00:16:55.698 "peer_address": { 00:16:55.698 "trtype": "TCP", 00:16:55.698 "adrfam": "IPv4", 00:16:55.698 "traddr": "10.0.0.1", 00:16:55.698 "trsvcid": "37182" 00:16:55.698 }, 00:16:55.698 "auth": { 00:16:55.698 "state": "completed", 00:16:55.698 "digest": "sha256", 00:16:55.698 "dhgroup": "ffdhe4096" 00:16:55.698 } 00:16:55.698 } 00:16:55.698 ]' 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.698 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.956 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:55.956 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.888 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.146 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.711 00:16:57.711 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:57.711 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.711 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.968 { 00:16:57.968 "cntlid": 29, 00:16:57.968 "qid": 0, 00:16:57.968 "state": "enabled", 00:16:57.968 "thread": "nvmf_tgt_poll_group_000", 00:16:57.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:57.968 "listen_address": { 00:16:57.968 "trtype": "TCP", 00:16:57.968 "adrfam": "IPv4", 00:16:57.968 "traddr": "10.0.0.2", 00:16:57.968 "trsvcid": "4420" 00:16:57.968 }, 00:16:57.968 "peer_address": { 00:16:57.968 "trtype": "TCP", 00:16:57.968 "adrfam": "IPv4", 00:16:57.968 "traddr": "10.0.0.1", 00:16:57.968 "trsvcid": "37204" 00:16:57.968 }, 00:16:57.968 "auth": { 00:16:57.968 "state": "completed", 00:16:57.968 "digest": "sha256", 00:16:57.968 "dhgroup": "ffdhe4096" 00:16:57.968 } 00:16:57.968 } 00:16:57.968 ]' 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.968 10:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.968 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.225 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:58.225 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.157 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:59.723 10:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.723 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.980 00:16:59.980 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.980 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.980 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.238 10:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.238 { 00:17:00.238 "cntlid": 31, 00:17:00.238 "qid": 0, 00:17:00.238 "state": "enabled", 00:17:00.238 "thread": "nvmf_tgt_poll_group_000", 00:17:00.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:00.238 "listen_address": { 00:17:00.238 "trtype": "TCP", 00:17:00.238 "adrfam": "IPv4", 00:17:00.238 "traddr": "10.0.0.2", 00:17:00.238 "trsvcid": "4420" 00:17:00.238 }, 00:17:00.238 "peer_address": { 00:17:00.238 "trtype": "TCP", 00:17:00.238 "adrfam": "IPv4", 00:17:00.238 "traddr": "10.0.0.1", 00:17:00.238 "trsvcid": "43270" 00:17:00.238 }, 00:17:00.238 "auth": { 00:17:00.238 "state": "completed", 00:17:00.238 "digest": "sha256", 00:17:00.238 "dhgroup": "ffdhe4096" 00:17:00.238 } 00:17:00.238 } 00:17:00.238 ]' 00:17:00.238 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.495 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.496 10:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.752 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:00.752 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:01.691 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.953 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.519 00:17:02.519 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.519 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.519 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.777 { 00:17:02.777 "cntlid": 33, 00:17:02.777 "qid": 0, 00:17:02.777 "state": "enabled", 00:17:02.777 "thread": "nvmf_tgt_poll_group_000", 00:17:02.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:02.777 "listen_address": { 
00:17:02.777 "trtype": "TCP", 00:17:02.777 "adrfam": "IPv4", 00:17:02.777 "traddr": "10.0.0.2", 00:17:02.777 "trsvcid": "4420" 00:17:02.777 }, 00:17:02.777 "peer_address": { 00:17:02.777 "trtype": "TCP", 00:17:02.777 "adrfam": "IPv4", 00:17:02.777 "traddr": "10.0.0.1", 00:17:02.777 "trsvcid": "43296" 00:17:02.777 }, 00:17:02.777 "auth": { 00:17:02.777 "state": "completed", 00:17:02.777 "digest": "sha256", 00:17:02.777 "dhgroup": "ffdhe6144" 00:17:02.777 } 00:17:02.777 } 00:17:02.777 ]' 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.777 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.035 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:03.035 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.968 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.225 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.158 00:17:05.158 10:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.158 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.158 { 00:17:05.158 "cntlid": 35, 00:17:05.159 "qid": 0, 00:17:05.159 "state": "enabled", 00:17:05.159 "thread": "nvmf_tgt_poll_group_000", 00:17:05.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:05.159 "listen_address": { 00:17:05.159 "trtype": "TCP", 00:17:05.159 "adrfam": "IPv4", 00:17:05.159 "traddr": "10.0.0.2", 00:17:05.159 "trsvcid": "4420" 00:17:05.159 }, 00:17:05.159 "peer_address": { 00:17:05.159 "trtype": "TCP", 00:17:05.159 "adrfam": "IPv4", 00:17:05.159 "traddr": "10.0.0.1", 00:17:05.159 "trsvcid": "43324" 00:17:05.159 }, 00:17:05.159 "auth": { 00:17:05.159 "state": "completed", 00:17:05.159 "digest": "sha256", 00:17:05.159 "dhgroup": "ffdhe6144" 00:17:05.159 } 00:17:05.159 } 00:17:05.159 ]' 00:17:05.159 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:05.159 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.159 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.416 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.416 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.416 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.416 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.416 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.674 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:05.674 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:06.607 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.607 10:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.607 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.607 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.607 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.607 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.608 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.608 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.865 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.429 00:17:07.429 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.429 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.429 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.687 { 00:17:07.687 "cntlid": 37, 00:17:07.687 "qid": 0, 00:17:07.687 "state": "enabled", 00:17:07.687 "thread": "nvmf_tgt_poll_group_000", 00:17:07.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:07.687 "listen_address": { 00:17:07.687 "trtype": "TCP", 00:17:07.687 "adrfam": "IPv4", 00:17:07.687 "traddr": "10.0.0.2", 00:17:07.687 "trsvcid": "4420" 00:17:07.687 }, 00:17:07.687 "peer_address": { 00:17:07.687 "trtype": "TCP", 00:17:07.687 "adrfam": "IPv4", 00:17:07.687 "traddr": "10.0.0.1", 00:17:07.687 "trsvcid": "43362" 00:17:07.687 }, 00:17:07.687 "auth": { 00:17:07.687 "state": "completed", 00:17:07.687 "digest": "sha256", 00:17:07.687 "dhgroup": "ffdhe6144" 00:17:07.687 } 00:17:07.687 } 00:17:07.687 ]' 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.687 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.944 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.944 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.944 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:07.944 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.944 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.202 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:08.202 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.134 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.392 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.393 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.393 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.957 00:17:09.957 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.957 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.957 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.246 { 00:17:10.246 "cntlid": 39, 00:17:10.246 "qid": 0, 00:17:10.246 "state": "enabled", 00:17:10.246 "thread": "nvmf_tgt_poll_group_000", 00:17:10.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.246 "listen_address": { 00:17:10.246 "trtype": 
"TCP", 00:17:10.246 "adrfam": "IPv4", 00:17:10.246 "traddr": "10.0.0.2", 00:17:10.246 "trsvcid": "4420" 00:17:10.246 }, 00:17:10.246 "peer_address": { 00:17:10.246 "trtype": "TCP", 00:17:10.246 "adrfam": "IPv4", 00:17:10.246 "traddr": "10.0.0.1", 00:17:10.246 "trsvcid": "33792" 00:17:10.246 }, 00:17:10.246 "auth": { 00:17:10.246 "state": "completed", 00:17:10.246 "digest": "sha256", 00:17:10.246 "dhgroup": "ffdhe6144" 00:17:10.246 } 00:17:10.246 } 00:17:10.246 ]' 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.246 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.529 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:10.529 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.460 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.718 10:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.718 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.650 00:17:12.650 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.650 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.650 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.907 { 00:17:12.907 "cntlid": 41, 00:17:12.907 "qid": 0, 00:17:12.907 "state": "enabled", 00:17:12.907 "thread": "nvmf_tgt_poll_group_000", 00:17:12.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:12.907 "listen_address": { 00:17:12.907 "trtype": "TCP", 00:17:12.907 "adrfam": "IPv4", 00:17:12.907 "traddr": "10.0.0.2", 00:17:12.907 "trsvcid": "4420" 00:17:12.907 }, 00:17:12.907 "peer_address": { 00:17:12.907 "trtype": "TCP", 00:17:12.907 "adrfam": "IPv4", 00:17:12.907 "traddr": "10.0.0.1", 00:17:12.907 "trsvcid": "33838" 00:17:12.907 }, 00:17:12.907 "auth": { 00:17:12.907 "state": "completed", 00:17:12.907 "digest": "sha256", 00:17:12.907 "dhgroup": "ffdhe8192" 00:17:12.907 } 00:17:12.907 } 00:17:12.907 ]' 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.907 10:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.907 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.164 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.164 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.164 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.421 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:13.421 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.353 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.610 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.610 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.610 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.610 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.610 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.541 00:17:15.541 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.541 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.541 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.541 { 00:17:15.541 "cntlid": 43, 00:17:15.541 "qid": 0, 00:17:15.541 "state": "enabled", 00:17:15.541 "thread": "nvmf_tgt_poll_group_000", 00:17:15.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:15.541 "listen_address": { 00:17:15.541 "trtype": "TCP", 00:17:15.541 "adrfam": "IPv4", 00:17:15.541 "traddr": "10.0.0.2", 00:17:15.541 "trsvcid": "4420" 00:17:15.541 }, 00:17:15.541 "peer_address": { 00:17:15.541 "trtype": "TCP", 00:17:15.541 "adrfam": "IPv4", 00:17:15.541 "traddr": "10.0.0.1", 00:17:15.541 "trsvcid": "33872" 00:17:15.541 }, 00:17:15.541 "auth": { 00:17:15.541 "state": "completed", 00:17:15.541 "digest": "sha256", 00:17:15.541 "dhgroup": "ffdhe8192" 00:17:15.541 } 00:17:15.541 } 00:17:15.541 ]' 00:17:15.541 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.798 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.055 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:16.055 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.987 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.244 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.176 00:17:18.176 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.176 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.176 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.433 { 00:17:18.433 "cntlid": 45, 00:17:18.433 "qid": 0, 00:17:18.433 "state": "enabled", 00:17:18.433 "thread": "nvmf_tgt_poll_group_000", 00:17:18.433 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.433 "listen_address": { 00:17:18.433 "trtype": "TCP", 00:17:18.433 "adrfam": "IPv4", 00:17:18.433 "traddr": "10.0.0.2", 00:17:18.433 "trsvcid": "4420" 00:17:18.433 }, 00:17:18.433 "peer_address": { 00:17:18.433 "trtype": "TCP", 00:17:18.433 "adrfam": "IPv4", 00:17:18.433 "traddr": "10.0.0.1", 00:17:18.433 "trsvcid": "33882" 00:17:18.433 }, 00:17:18.433 "auth": { 00:17:18.433 "state": "completed", 00:17:18.433 "digest": "sha256", 00:17:18.433 "dhgroup": "ffdhe8192" 00:17:18.433 } 00:17:18.433 } 00:17:18.433 ]' 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.433 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.691 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:18.691 10:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.623 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.880 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.812 00:17:20.812 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:20.812 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.812 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.069 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.069 { 00:17:21.069 "cntlid": 47, 00:17:21.069 "qid": 0, 00:17:21.069 "state": "enabled", 00:17:21.069 "thread": "nvmf_tgt_poll_group_000", 00:17:21.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:21.069 "listen_address": { 00:17:21.069 "trtype": "TCP", 00:17:21.069 "adrfam": "IPv4", 00:17:21.069 "traddr": "10.0.0.2", 00:17:21.069 "trsvcid": "4420" 00:17:21.069 }, 00:17:21.069 "peer_address": { 00:17:21.069 "trtype": "TCP", 00:17:21.069 "adrfam": "IPv4", 00:17:21.070 "traddr": "10.0.0.1", 00:17:21.070 "trsvcid": "58134" 00:17:21.070 }, 00:17:21.070 "auth": { 00:17:21.070 "state": "completed", 00:17:21.070 "digest": "sha256", 00:17:21.070 "dhgroup": "ffdhe8192" 00:17:21.070 } 00:17:21.070 } 00:17:21.070 ]' 00:17:21.070 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.070 10:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.070 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.327 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.327 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.327 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.327 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.327 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.584 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:21.584 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.517 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.774 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.031 00:17:23.031 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.031 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.031 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.288 10:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.288 { 00:17:23.288 "cntlid": 49, 00:17:23.288 "qid": 0, 00:17:23.288 "state": "enabled", 00:17:23.288 "thread": "nvmf_tgt_poll_group_000", 00:17:23.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:23.288 "listen_address": { 00:17:23.288 "trtype": "TCP", 00:17:23.288 "adrfam": "IPv4", 00:17:23.288 "traddr": "10.0.0.2", 00:17:23.288 "trsvcid": "4420" 00:17:23.288 }, 00:17:23.288 "peer_address": { 00:17:23.288 "trtype": "TCP", 00:17:23.288 "adrfam": "IPv4", 00:17:23.288 "traddr": "10.0.0.1", 00:17:23.288 "trsvcid": "58154" 00:17:23.288 }, 00:17:23.288 "auth": { 00:17:23.288 "state": "completed", 00:17:23.288 "digest": "sha384", 00:17:23.288 "dhgroup": "null" 00:17:23.288 } 00:17:23.288 } 00:17:23.288 ]' 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.288 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.546 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.546 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.546 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.803 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:23.803 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.735 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.993 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.250 00:17:25.250 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.250 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.250 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.508 { 00:17:25.508 "cntlid": 51, 
00:17:25.508 "qid": 0, 00:17:25.508 "state": "enabled", 00:17:25.508 "thread": "nvmf_tgt_poll_group_000", 00:17:25.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:25.508 "listen_address": { 00:17:25.508 "trtype": "TCP", 00:17:25.508 "adrfam": "IPv4", 00:17:25.508 "traddr": "10.0.0.2", 00:17:25.508 "trsvcid": "4420" 00:17:25.508 }, 00:17:25.508 "peer_address": { 00:17:25.508 "trtype": "TCP", 00:17:25.508 "adrfam": "IPv4", 00:17:25.508 "traddr": "10.0.0.1", 00:17:25.508 "trsvcid": "58184" 00:17:25.508 }, 00:17:25.508 "auth": { 00:17:25.508 "state": "completed", 00:17:25.508 "digest": "sha384", 00:17:25.508 "dhgroup": "null" 00:17:25.508 } 00:17:25.508 } 00:17:25.508 ]' 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.508 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.072 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret 
DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:26.072 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:26.636 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.636 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.636 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.636 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.894 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.894 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.894 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.894 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.152 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.409 00:17:27.409 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.409 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.409 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.667 { 00:17:27.667 "cntlid": 53, 00:17:27.667 "qid": 0, 00:17:27.667 "state": "enabled", 00:17:27.667 "thread": "nvmf_tgt_poll_group_000", 00:17:27.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:27.667 "listen_address": { 00:17:27.667 "trtype": "TCP", 00:17:27.667 "adrfam": "IPv4", 00:17:27.667 "traddr": "10.0.0.2", 00:17:27.667 "trsvcid": "4420" 00:17:27.667 }, 00:17:27.667 "peer_address": { 00:17:27.667 "trtype": "TCP", 00:17:27.667 "adrfam": "IPv4", 00:17:27.667 "traddr": "10.0.0.1", 00:17:27.667 "trsvcid": "58212" 00:17:27.667 }, 00:17:27.667 "auth": { 00:17:27.667 "state": "completed", 00:17:27.667 "digest": "sha384", 00:17:27.667 "dhgroup": "null" 00:17:27.667 } 00:17:27.667 } 
00:17:27.667 ]' 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.667 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.232 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:28.232 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.165 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.165 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.729 00:17:29.729 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.730 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.730 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.987 { 00:17:29.987 "cntlid": 55, 00:17:29.987 "qid": 0, 00:17:29.987 "state": "enabled", 00:17:29.987 "thread": "nvmf_tgt_poll_group_000", 00:17:29.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:29.987 "listen_address": { 00:17:29.987 "trtype": "TCP", 00:17:29.987 "adrfam": "IPv4", 00:17:29.987 "traddr": "10.0.0.2", 00:17:29.987 "trsvcid": "4420" 00:17:29.987 }, 00:17:29.987 "peer_address": { 00:17:29.987 "trtype": "TCP", 00:17:29.987 "adrfam": "IPv4", 00:17:29.987 "traddr": "10.0.0.1", 00:17:29.987 "trsvcid": "51456" 00:17:29.987 }, 00:17:29.987 "auth": { 00:17:29.987 "state": "completed", 00:17:29.987 "digest": "sha384", 00:17:29.987 "dhgroup": "null" 00:17:29.987 } 00:17:29.987 } 00:17:29.987 ]' 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.987 10:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.987 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.245 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:30.245 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.177 10:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.177 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.434 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.435 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.694 00:17:31.695 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.695 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.695 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.953 { 00:17:31.953 "cntlid": 57, 00:17:31.953 "qid": 0, 00:17:31.953 "state": "enabled", 00:17:31.953 "thread": "nvmf_tgt_poll_group_000", 00:17:31.953 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.953 "listen_address": { 00:17:31.953 "trtype": "TCP", 00:17:31.953 "adrfam": "IPv4", 00:17:31.953 "traddr": "10.0.0.2", 00:17:31.953 "trsvcid": "4420" 00:17:31.953 }, 00:17:31.953 "peer_address": { 00:17:31.953 "trtype": "TCP", 00:17:31.953 "adrfam": "IPv4", 00:17:31.953 "traddr": "10.0.0.1", 00:17:31.953 "trsvcid": "51488" 00:17:31.953 }, 00:17:31.953 "auth": { 00:17:31.953 "state": "completed", 00:17:31.953 "digest": "sha384", 00:17:31.953 "dhgroup": "ffdhe2048" 00:17:31.953 } 00:17:31.953 } 00:17:31.953 ]' 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.953 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.210 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.210 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.210 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.210 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.210 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.467 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret 
DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:32.467 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.400 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.657 10:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.657 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.914 00:17:33.914 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.914 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.914 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.171 { 00:17:34.171 "cntlid": 59, 00:17:34.171 "qid": 0, 00:17:34.171 "state": "enabled", 00:17:34.171 "thread": "nvmf_tgt_poll_group_000", 00:17:34.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:34.171 "listen_address": { 00:17:34.171 "trtype": "TCP", 00:17:34.171 "adrfam": "IPv4", 00:17:34.171 "traddr": "10.0.0.2", 00:17:34.171 "trsvcid": "4420" 00:17:34.171 }, 00:17:34.171 "peer_address": { 00:17:34.171 "trtype": "TCP", 00:17:34.171 "adrfam": "IPv4", 00:17:34.171 "traddr": "10.0.0.1", 00:17:34.171 "trsvcid": "51520" 00:17:34.171 }, 00:17:34.171 "auth": { 00:17:34.171 "state": 
"completed", 00:17:34.171 "digest": "sha384", 00:17:34.171 "dhgroup": "ffdhe2048" 00:17:34.171 } 00:17:34.171 } 00:17:34.171 ]' 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.171 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.429 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.429 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.429 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.429 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.429 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.687 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:34.687 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:35.620 10:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.620 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.877 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.134 00:17:36.134 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.134 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.134 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.392 
10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.392 { 00:17:36.392 "cntlid": 61, 00:17:36.392 "qid": 0, 00:17:36.392 "state": "enabled", 00:17:36.392 "thread": "nvmf_tgt_poll_group_000", 00:17:36.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:36.392 "listen_address": { 00:17:36.392 "trtype": "TCP", 00:17:36.392 "adrfam": "IPv4", 00:17:36.392 "traddr": "10.0.0.2", 00:17:36.392 "trsvcid": "4420" 00:17:36.392 }, 00:17:36.392 "peer_address": { 00:17:36.392 "trtype": "TCP", 00:17:36.392 "adrfam": "IPv4", 00:17:36.392 "traddr": "10.0.0.1", 00:17:36.392 "trsvcid": "51544" 00:17:36.392 }, 00:17:36.392 "auth": { 00:17:36.392 "state": "completed", 00:17:36.392 "digest": "sha384", 00:17:36.392 "dhgroup": "ffdhe2048" 00:17:36.392 } 00:17:36.392 } 00:17:36.392 ]' 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.392 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.392 10:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.649 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.649 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.649 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.907 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:36.907 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.838 
10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.838 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.095 10:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.095 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.096 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.353 00:17:38.353 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.353 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.353 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.610 { 00:17:38.610 "cntlid": 63, 00:17:38.610 
"qid": 0, 00:17:38.610 "state": "enabled", 00:17:38.610 "thread": "nvmf_tgt_poll_group_000", 00:17:38.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:38.610 "listen_address": { 00:17:38.610 "trtype": "TCP", 00:17:38.610 "adrfam": "IPv4", 00:17:38.610 "traddr": "10.0.0.2", 00:17:38.610 "trsvcid": "4420" 00:17:38.610 }, 00:17:38.610 "peer_address": { 00:17:38.610 "trtype": "TCP", 00:17:38.610 "adrfam": "IPv4", 00:17:38.610 "traddr": "10.0.0.1", 00:17:38.610 "trsvcid": "51572" 00:17:38.610 }, 00:17:38.610 "auth": { 00:17:38.610 "state": "completed", 00:17:38.610 "digest": "sha384", 00:17:38.610 "dhgroup": "ffdhe2048" 00:17:38.610 } 00:17:38.610 } 00:17:38.610 ]' 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.610 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.868 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.868 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.868 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.868 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.868 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.125 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:39.125 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.061 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.364 10:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.364 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.648 00:17:40.648 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.648 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.648 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.905 { 00:17:40.905 "cntlid": 65, 00:17:40.905 "qid": 0, 00:17:40.905 "state": "enabled", 00:17:40.905 "thread": "nvmf_tgt_poll_group_000", 00:17:40.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:40.905 "listen_address": { 00:17:40.905 "trtype": "TCP", 00:17:40.905 "adrfam": "IPv4", 00:17:40.905 "traddr": "10.0.0.2", 00:17:40.905 "trsvcid": "4420" 00:17:40.905 }, 00:17:40.905 "peer_address": { 00:17:40.905 "trtype": "TCP", 00:17:40.905 "adrfam": "IPv4", 00:17:40.905 "traddr": "10.0.0.1", 00:17:40.905 "trsvcid": "42544" 00:17:40.905 }, 00:17:40.905 "auth": { 00:17:40.905 "state": 
"completed", 00:17:40.905 "digest": "sha384", 00:17:40.905 "dhgroup": "ffdhe3072" 00:17:40.905 } 00:17:40.905 } 00:17:40.905 ]' 00:17:40.905 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.163 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.420 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:41.420 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret 
DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:42.351 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.351 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:42.351 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.351 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.351 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.352 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.352 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.352 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.609 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.867 00:17:42.867 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.867 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.867 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.432 { 00:17:43.432 "cntlid": 67, 00:17:43.432 "qid": 0, 00:17:43.432 "state": "enabled", 00:17:43.432 "thread": "nvmf_tgt_poll_group_000", 00:17:43.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:43.432 "listen_address": { 00:17:43.432 "trtype": "TCP", 00:17:43.432 "adrfam": "IPv4", 00:17:43.432 "traddr": "10.0.0.2", 00:17:43.432 "trsvcid": "4420" 00:17:43.432 }, 00:17:43.432 "peer_address": { 00:17:43.432 "trtype": "TCP", 00:17:43.432 "adrfam": "IPv4", 00:17:43.432 "traddr": "10.0.0.1", 00:17:43.432 "trsvcid": "42584" 00:17:43.432 }, 00:17:43.432 "auth": { 00:17:43.432 "state": "completed", 00:17:43.432 "digest": "sha384", 00:17:43.432 "dhgroup": "ffdhe3072" 00:17:43.432 } 00:17:43.432 } 00:17:43.432 ]' 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.432 10:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.432 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.689 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:43.689 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.622 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.879 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.444 00:17:45.444 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.444 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.444 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 10:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.702 { 00:17:45.702 "cntlid": 69, 00:17:45.702 "qid": 0, 00:17:45.702 "state": "enabled", 00:17:45.702 "thread": "nvmf_tgt_poll_group_000", 00:17:45.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:45.702 "listen_address": { 00:17:45.702 "trtype": "TCP", 00:17:45.702 "adrfam": "IPv4", 00:17:45.702 "traddr": "10.0.0.2", 00:17:45.702 "trsvcid": "4420" 00:17:45.702 }, 00:17:45.702 "peer_address": { 00:17:45.702 "trtype": "TCP", 00:17:45.702 "adrfam": "IPv4", 00:17:45.702 "traddr": "10.0.0.1", 00:17:45.702 "trsvcid": "42630" 00:17:45.702 }, 00:17:45.702 "auth": { 00:17:45.702 "state": "completed", 00:17:45.702 "digest": "sha384", 00:17:45.702 "dhgroup": "ffdhe3072" 00:17:45.702 } 00:17:45.702 } 00:17:45.702 ]' 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.702 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.960 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:45.960 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.892 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.149 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.150 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.714 00:17:47.714 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.714 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.714 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.972 { 00:17:47.972 "cntlid": 71, 00:17:47.972 "qid": 0, 00:17:47.972 "state": "enabled", 00:17:47.972 "thread": "nvmf_tgt_poll_group_000", 00:17:47.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:47.972 "listen_address": { 00:17:47.972 "trtype": "TCP", 00:17:47.972 "adrfam": "IPv4", 00:17:47.972 "traddr": "10.0.0.2", 00:17:47.972 "trsvcid": "4420" 00:17:47.972 }, 00:17:47.972 "peer_address": { 00:17:47.972 "trtype": "TCP", 00:17:47.972 "adrfam": "IPv4", 00:17:47.972 "traddr": "10.0.0.1", 
00:17:47.972 "trsvcid": "42660" 00:17:47.972 }, 00:17:47.972 "auth": { 00:17:47.972 "state": "completed", 00:17:47.972 "digest": "sha384", 00:17:47.972 "dhgroup": "ffdhe3072" 00:17:47.972 } 00:17:47.972 } 00:17:47.972 ]' 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.972 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.230 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:48.230 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.164 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.422 10:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.422 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.422 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.422 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.422 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.422 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.988 00:17:49.988 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.988 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.988 10:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.245 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.245 { 00:17:50.245 "cntlid": 73, 00:17:50.245 "qid": 0, 00:17:50.245 "state": "enabled", 00:17:50.245 "thread": "nvmf_tgt_poll_group_000", 00:17:50.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:50.246 "listen_address": { 00:17:50.246 "trtype": "TCP", 00:17:50.246 "adrfam": "IPv4", 00:17:50.246 "traddr": "10.0.0.2", 00:17:50.246 "trsvcid": "4420" 00:17:50.246 }, 00:17:50.246 "peer_address": { 00:17:50.246 "trtype": "TCP", 00:17:50.246 "adrfam": "IPv4", 00:17:50.246 "traddr": "10.0.0.1", 00:17:50.246 "trsvcid": "46286" 00:17:50.246 }, 00:17:50.246 "auth": { 00:17:50.246 "state": "completed", 00:17:50.246 "digest": "sha384", 00:17:50.246 "dhgroup": "ffdhe4096" 00:17:50.246 } 00:17:50.246 } 00:17:50.246 ]' 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.246 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:50.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.436 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.694 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.259 00:17:52.259 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.259 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.259 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.517 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.517 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.517 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.517 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.517 
10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.517 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.517 { 00:17:52.517 "cntlid": 75, 00:17:52.517 "qid": 0, 00:17:52.517 "state": "enabled", 00:17:52.517 "thread": "nvmf_tgt_poll_group_000", 00:17:52.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:52.517 "listen_address": { 00:17:52.517 "trtype": "TCP", 00:17:52.517 "adrfam": "IPv4", 00:17:52.517 "traddr": "10.0.0.2", 00:17:52.517 "trsvcid": "4420" 00:17:52.517 }, 00:17:52.517 "peer_address": { 00:17:52.517 "trtype": "TCP", 00:17:52.517 "adrfam": "IPv4", 00:17:52.517 "traddr": "10.0.0.1", 00:17:52.517 "trsvcid": "46304" 00:17:52.517 }, 00:17:52.517 "auth": { 00:17:52.518 "state": "completed", 00:17:52.518 "digest": "sha384", 00:17:52.518 "dhgroup": "ffdhe4096" 00:17:52.518 } 00:17:52.518 } 00:17:52.518 ]' 00:17:52.518 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.518 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.518 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.518 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.518 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.518 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.518 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.518 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.776 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:52.776 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.709 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.274 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.274 10:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.532 00:17:54.532 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.532 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.532 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.790 { 00:17:54.790 "cntlid": 77, 00:17:54.790 "qid": 0, 00:17:54.790 "state": "enabled", 00:17:54.790 "thread": "nvmf_tgt_poll_group_000", 00:17:54.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:54.790 "listen_address": { 00:17:54.790 "trtype": "TCP", 00:17:54.790 "adrfam": "IPv4", 00:17:54.790 "traddr": "10.0.0.2", 00:17:54.790 "trsvcid": "4420" 00:17:54.790 }, 00:17:54.790 "peer_address": { 
00:17:54.790 "trtype": "TCP", 00:17:54.790 "adrfam": "IPv4", 00:17:54.790 "traddr": "10.0.0.1", 00:17:54.790 "trsvcid": "46338" 00:17:54.790 }, 00:17:54.790 "auth": { 00:17:54.790 "state": "completed", 00:17:54.790 "digest": "sha384", 00:17:54.790 "dhgroup": "ffdhe4096" 00:17:54.790 } 00:17:54.790 } 00:17:54.790 ]' 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.790 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.355 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:55.355 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:17:55.919 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.919 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:55.919 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.919 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.176 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.176 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.176 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.176 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.433 10:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.433 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.434 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.690 00:17:56.690 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.690 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.690 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.947 { 00:17:56.947 "cntlid": 79, 00:17:56.947 "qid": 0, 00:17:56.947 "state": "enabled", 00:17:56.947 "thread": "nvmf_tgt_poll_group_000", 00:17:56.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:56.947 "listen_address": { 00:17:56.947 "trtype": "TCP", 00:17:56.947 "adrfam": "IPv4", 00:17:56.947 "traddr": "10.0.0.2", 00:17:56.947 "trsvcid": "4420" 00:17:56.947 }, 00:17:56.947 "peer_address": { 00:17:56.947 "trtype": "TCP", 00:17:56.947 "adrfam": "IPv4", 00:17:56.947 "traddr": "10.0.0.1", 00:17:56.947 "trsvcid": "46370" 00:17:56.947 }, 00:17:56.947 "auth": { 00:17:56.947 "state": "completed", 00:17:56.947 "digest": "sha384", 00:17:56.947 "dhgroup": "ffdhe4096" 00:17:56.947 } 00:17:56.947 } 00:17:56.947 ]' 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.947 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.206 10:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.206 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.206 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.206 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.206 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.464 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:57.464 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.397 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.655 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.219 00:17:59.219 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.220 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.220 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.478 10:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.478 { 00:17:59.478 "cntlid": 81, 00:17:59.478 "qid": 0, 00:17:59.478 "state": "enabled", 00:17:59.478 "thread": "nvmf_tgt_poll_group_000", 00:17:59.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:59.478 "listen_address": { 00:17:59.478 "trtype": "TCP", 00:17:59.478 "adrfam": "IPv4", 00:17:59.478 "traddr": "10.0.0.2", 00:17:59.478 "trsvcid": "4420" 00:17:59.478 }, 00:17:59.478 "peer_address": { 00:17:59.478 "trtype": "TCP", 00:17:59.478 "adrfam": "IPv4", 00:17:59.478 "traddr": "10.0.0.1", 00:17:59.478 "trsvcid": "33362" 00:17:59.478 }, 00:17:59.478 "auth": { 00:17:59.478 "state": "completed", 00:17:59.478 "digest": "sha384", 00:17:59.478 "dhgroup": "ffdhe6144" 00:17:59.478 } 00:17:59.478 } 00:17:59.478 ]' 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.478 10:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.479 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.479 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.479 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.479 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.479 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.737 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:17:59.737 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.669 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.669 10:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.234 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.235 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.493 00:18:01.493 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.493 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.493 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.758 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.758 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.758 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.758 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.014 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.014 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.014 { 00:18:02.014 "cntlid": 83, 00:18:02.014 "qid": 0, 00:18:02.014 "state": "enabled", 00:18:02.014 "thread": "nvmf_tgt_poll_group_000", 00:18:02.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:02.014 "listen_address": { 00:18:02.014 "trtype": "TCP", 00:18:02.014 "adrfam": "IPv4", 00:18:02.014 "traddr": "10.0.0.2", 00:18:02.014 
"trsvcid": "4420" 00:18:02.014 }, 00:18:02.014 "peer_address": { 00:18:02.014 "trtype": "TCP", 00:18:02.014 "adrfam": "IPv4", 00:18:02.014 "traddr": "10.0.0.1", 00:18:02.014 "trsvcid": "33390" 00:18:02.014 }, 00:18:02.014 "auth": { 00:18:02.014 "state": "completed", 00:18:02.014 "digest": "sha384", 00:18:02.014 "dhgroup": "ffdhe6144" 00:18:02.014 } 00:18:02.014 } 00:18:02.014 ]' 00:18:02.014 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.014 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.015 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.272 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:02.272 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.203 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.460 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.024 00:18:04.024 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.024 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:04.024 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.282 { 00:18:04.282 "cntlid": 85, 00:18:04.282 "qid": 0, 00:18:04.282 "state": "enabled", 00:18:04.282 "thread": "nvmf_tgt_poll_group_000", 00:18:04.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:04.282 "listen_address": { 00:18:04.282 "trtype": "TCP", 00:18:04.282 "adrfam": "IPv4", 00:18:04.282 "traddr": "10.0.0.2", 00:18:04.282 "trsvcid": "4420" 00:18:04.282 }, 00:18:04.282 "peer_address": { 00:18:04.282 "trtype": "TCP", 00:18:04.282 "adrfam": "IPv4", 00:18:04.282 "traddr": "10.0.0.1", 00:18:04.282 "trsvcid": "33416" 00:18:04.282 }, 00:18:04.282 "auth": { 00:18:04.282 "state": "completed", 00:18:04.282 "digest": "sha384", 00:18:04.282 "dhgroup": "ffdhe6144" 00:18:04.282 } 00:18:04.282 } 00:18:04.282 ]' 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.282 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.282 10:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.539 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.539 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.539 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.539 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.539 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.797 10:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:04.797 10:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.732 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.990 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.555 00:18:06.555 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.555 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.555 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.813 { 00:18:06.813 "cntlid": 87, 00:18:06.813 "qid": 0, 00:18:06.813 "state": "enabled", 00:18:06.813 "thread": "nvmf_tgt_poll_group_000", 00:18:06.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:06.813 "listen_address": { 00:18:06.813 "trtype": "TCP", 00:18:06.813 "adrfam": "IPv4", 00:18:06.813 "traddr": "10.0.0.2", 00:18:06.813 "trsvcid": "4420" 00:18:06.813 }, 00:18:06.813 "peer_address": { 00:18:06.813 "trtype": "TCP", 00:18:06.813 "adrfam": "IPv4", 00:18:06.813 "traddr": "10.0.0.1", 00:18:06.813 "trsvcid": "33434" 00:18:06.813 }, 00:18:06.813 "auth": { 00:18:06.813 "state": "completed", 00:18:06.813 "digest": "sha384", 00:18:06.813 "dhgroup": "ffdhe6144" 00:18:06.813 } 00:18:06.813 } 00:18:06.813 ]' 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.813 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.071 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:07.071 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.006 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.006 10:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.571 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.504 00:18:09.504 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.504 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.504 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.504 { 00:18:09.504 "cntlid": 89, 00:18:09.504 "qid": 0, 00:18:09.504 "state": "enabled", 00:18:09.504 "thread": "nvmf_tgt_poll_group_000", 00:18:09.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:09.504 "listen_address": { 00:18:09.504 "trtype": "TCP", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "10.0.0.2", 00:18:09.504 
"trsvcid": "4420" 00:18:09.504 }, 00:18:09.504 "peer_address": { 00:18:09.504 "trtype": "TCP", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "10.0.0.1", 00:18:09.504 "trsvcid": "51818" 00:18:09.504 }, 00:18:09.504 "auth": { 00:18:09.504 "state": "completed", 00:18:09.504 "digest": "sha384", 00:18:09.504 "dhgroup": "ffdhe8192" 00:18:09.504 } 00:18:09.504 } 00:18:09.504 ]' 00:18:09.504 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.763 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.077 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:10.077 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.037 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.297 10:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.297 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.229 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.229 { 00:18:12.229 "cntlid": 91, 00:18:12.229 "qid": 0, 00:18:12.229 "state": "enabled", 00:18:12.229 "thread": "nvmf_tgt_poll_group_000", 00:18:12.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:12.229 "listen_address": { 00:18:12.229 "trtype": "TCP", 00:18:12.229 "adrfam": "IPv4", 00:18:12.229 "traddr": "10.0.0.2", 00:18:12.229 "trsvcid": "4420" 00:18:12.229 }, 00:18:12.229 "peer_address": { 00:18:12.229 "trtype": "TCP", 00:18:12.229 "adrfam": "IPv4", 00:18:12.229 "traddr": "10.0.0.1", 00:18:12.229 "trsvcid": "51854" 00:18:12.229 }, 00:18:12.229 "auth": { 00:18:12.229 "state": "completed", 00:18:12.229 "digest": "sha384", 00:18:12.229 "dhgroup": "ffdhe8192" 00:18:12.229 } 00:18:12.229 } 00:18:12.229 ]' 00:18:12.229 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.487 10:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.487 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.745 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:12.745 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.678 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.935 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.869 00:18:14.869 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.869 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.869 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.127 10:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.127 { 00:18:15.127 "cntlid": 93, 00:18:15.127 "qid": 0, 00:18:15.127 "state": "enabled", 00:18:15.127 "thread": "nvmf_tgt_poll_group_000", 00:18:15.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:15.127 "listen_address": { 00:18:15.127 "trtype": "TCP", 00:18:15.127 "adrfam": "IPv4", 00:18:15.127 "traddr": "10.0.0.2", 00:18:15.127 "trsvcid": "4420" 00:18:15.127 }, 00:18:15.127 "peer_address": { 00:18:15.127 "trtype": "TCP", 00:18:15.127 "adrfam": "IPv4", 00:18:15.127 "traddr": "10.0.0.1", 00:18:15.127 "trsvcid": "51886" 00:18:15.127 }, 00:18:15.127 "auth": { 00:18:15.127 "state": "completed", 00:18:15.127 "digest": "sha384", 00:18:15.127 "dhgroup": "ffdhe8192" 00:18:15.127 } 00:18:15.127 } 00:18:15.127 ]' 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.127 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.385 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.385 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.385 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.643 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:15.644 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.577 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.835 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.768 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.768 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.768 { 00:18:17.768 "cntlid": 95, 00:18:17.768 "qid": 0, 00:18:17.768 "state": "enabled", 00:18:17.768 "thread": "nvmf_tgt_poll_group_000", 00:18:17.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:17.769 "listen_address": { 00:18:17.769 "trtype": "TCP", 00:18:17.769 "adrfam": 
"IPv4", 00:18:17.769 "traddr": "10.0.0.2", 00:18:17.769 "trsvcid": "4420" 00:18:17.769 }, 00:18:17.769 "peer_address": { 00:18:17.769 "trtype": "TCP", 00:18:17.769 "adrfam": "IPv4", 00:18:17.769 "traddr": "10.0.0.1", 00:18:17.769 "trsvcid": "51922" 00:18:17.769 }, 00:18:17.769 "auth": { 00:18:17.769 "state": "completed", 00:18:17.769 "digest": "sha384", 00:18:17.769 "dhgroup": "ffdhe8192" 00:18:17.769 } 00:18:17.769 } 00:18:17.769 ]' 00:18:17.769 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.026 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.283 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:18.283 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.216 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.473 
10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.473 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.731 00:18:19.731 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.731 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.731 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.989 { 00:18:19.989 "cntlid": 97, 00:18:19.989 "qid": 0, 00:18:19.989 "state": "enabled", 00:18:19.989 "thread": "nvmf_tgt_poll_group_000", 00:18:19.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:19.989 "listen_address": { 00:18:19.989 "trtype": "TCP", 00:18:19.989 "adrfam": "IPv4", 00:18:19.989 "traddr": "10.0.0.2", 00:18:19.989 "trsvcid": "4420" 00:18:19.989 }, 00:18:19.989 "peer_address": { 00:18:19.989 "trtype": "TCP", 00:18:19.989 "adrfam": "IPv4", 00:18:19.989 "traddr": "10.0.0.1", 00:18:19.989 "trsvcid": "58530" 00:18:19.989 }, 00:18:19.989 "auth": { 00:18:19.989 "state": "completed", 00:18:19.989 "digest": "sha512", 00:18:19.989 "dhgroup": "null" 00:18:19.989 } 00:18:19.989 } 00:18:19.989 ]' 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.989 10:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.989 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.248 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:20.248 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.248 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.248 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.248 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.506 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:20.506 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:21.439 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.439 10:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.440 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.698 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.955 00:18:21.955 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.955 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.955 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.212 { 00:18:22.212 "cntlid": 99, 00:18:22.212 "qid": 0, 00:18:22.212 "state": "enabled", 00:18:22.212 "thread": "nvmf_tgt_poll_group_000", 00:18:22.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:22.212 "listen_address": { 00:18:22.212 "trtype": "TCP", 00:18:22.212 "adrfam": "IPv4", 00:18:22.212 "traddr": "10.0.0.2", 00:18:22.212 "trsvcid": "4420" 00:18:22.212 }, 00:18:22.212 "peer_address": { 00:18:22.212 "trtype": "TCP", 00:18:22.212 "adrfam": "IPv4", 00:18:22.212 "traddr": "10.0.0.1", 00:18:22.212 "trsvcid": "58564" 00:18:22.212 }, 00:18:22.212 "auth": { 00:18:22.212 "state": "completed", 00:18:22.212 "digest": "sha512", 00:18:22.212 "dhgroup": "null" 00:18:22.212 } 00:18:22.212 } 00:18:22.212 ]' 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.212 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.470 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:22.470 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.470 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.470 
10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.470 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.728 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:22.728 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.661 
10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.661 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.919 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.920 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.178 00:18:24.178 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.178 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.178 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.436 { 00:18:24.436 "cntlid": 101, 00:18:24.436 "qid": 0, 00:18:24.436 "state": "enabled", 00:18:24.436 "thread": "nvmf_tgt_poll_group_000", 00:18:24.436 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:24.436 "listen_address": { 00:18:24.436 "trtype": "TCP", 00:18:24.436 "adrfam": "IPv4", 00:18:24.436 "traddr": "10.0.0.2", 00:18:24.436 "trsvcid": "4420" 00:18:24.436 }, 00:18:24.436 "peer_address": { 00:18:24.436 "trtype": "TCP", 00:18:24.436 "adrfam": "IPv4", 00:18:24.436 "traddr": "10.0.0.1", 00:18:24.436 "trsvcid": "58588" 00:18:24.436 }, 00:18:24.436 "auth": { 00:18:24.436 "state": "completed", 00:18:24.436 "digest": "sha512", 00:18:24.436 "dhgroup": "null" 00:18:24.436 } 00:18:24.436 } 00:18:24.436 ]' 00:18:24.436 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.692 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.693 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.949 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:24.949 10:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.883 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.141 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.398 00:18:26.399 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.399 
10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.399 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.656 { 00:18:26.656 "cntlid": 103, 00:18:26.656 "qid": 0, 00:18:26.656 "state": "enabled", 00:18:26.656 "thread": "nvmf_tgt_poll_group_000", 00:18:26.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:26.656 "listen_address": { 00:18:26.656 "trtype": "TCP", 00:18:26.656 "adrfam": "IPv4", 00:18:26.656 "traddr": "10.0.0.2", 00:18:26.656 "trsvcid": "4420" 00:18:26.656 }, 00:18:26.656 "peer_address": { 00:18:26.656 "trtype": "TCP", 00:18:26.656 "adrfam": "IPv4", 00:18:26.656 "traddr": "10.0.0.1", 00:18:26.656 "trsvcid": "58610" 00:18:26.656 }, 00:18:26.656 "auth": { 00:18:26.656 "state": "completed", 00:18:26.656 "digest": "sha512", 00:18:26.656 "dhgroup": "null" 00:18:26.656 } 00:18:26.656 } 00:18:26.656 ]' 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:18:26.656 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.914 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:26.914 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.914 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.914 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.914 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.172 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:27.172 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.104 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.362 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.620 00:18:28.620 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.620 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.620 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.877 { 00:18:28.877 "cntlid": 105, 00:18:28.877 "qid": 0, 00:18:28.877 "state": "enabled", 00:18:28.877 "thread": "nvmf_tgt_poll_group_000", 00:18:28.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:28.877 "listen_address": { 00:18:28.877 "trtype": "TCP", 00:18:28.877 "adrfam": "IPv4", 00:18:28.877 "traddr": "10.0.0.2", 00:18:28.877 "trsvcid": "4420" 00:18:28.877 }, 00:18:28.877 "peer_address": { 00:18:28.877 "trtype": "TCP", 00:18:28.877 "adrfam": "IPv4", 00:18:28.877 "traddr": "10.0.0.1", 00:18:28.877 "trsvcid": "46120" 00:18:28.877 }, 00:18:28.877 "auth": { 00:18:28.877 "state": "completed", 00:18:28.877 "digest": "sha512", 00:18:28.877 "dhgroup": "ffdhe2048" 00:18:28.877 } 00:18:28.877 } 00:18:28.877 ]' 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.877 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.135 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.135 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.135 10:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.393 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:29.393 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.324 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.582 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.840 00:18:30.840 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.840 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.840 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.098 { 00:18:31.098 "cntlid": 107, 00:18:31.098 "qid": 0, 00:18:31.098 "state": "enabled", 00:18:31.098 "thread": "nvmf_tgt_poll_group_000", 00:18:31.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:31.098 
"listen_address": { 00:18:31.098 "trtype": "TCP", 00:18:31.098 "adrfam": "IPv4", 00:18:31.098 "traddr": "10.0.0.2", 00:18:31.098 "trsvcid": "4420" 00:18:31.098 }, 00:18:31.098 "peer_address": { 00:18:31.098 "trtype": "TCP", 00:18:31.098 "adrfam": "IPv4", 00:18:31.098 "traddr": "10.0.0.1", 00:18:31.098 "trsvcid": "46152" 00:18:31.098 }, 00:18:31.098 "auth": { 00:18:31.098 "state": "completed", 00:18:31.098 "digest": "sha512", 00:18:31.098 "dhgroup": "ffdhe2048" 00:18:31.098 } 00:18:31.098 } 00:18:31.098 ]' 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.098 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.356 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.356 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.356 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.356 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.356 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.613 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:31.613 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.545 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.802 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.079 00:18:33.079 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:33.079 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.079 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.344 { 00:18:33.344 "cntlid": 109, 00:18:33.344 "qid": 0, 00:18:33.344 "state": "enabled", 00:18:33.344 "thread": "nvmf_tgt_poll_group_000", 00:18:33.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:33.344 "listen_address": { 00:18:33.344 "trtype": "TCP", 00:18:33.344 "adrfam": "IPv4", 00:18:33.344 "traddr": "10.0.0.2", 00:18:33.344 "trsvcid": "4420" 00:18:33.344 }, 00:18:33.344 "peer_address": { 00:18:33.344 "trtype": "TCP", 00:18:33.344 "adrfam": "IPv4", 00:18:33.344 "traddr": "10.0.0.1", 00:18:33.344 "trsvcid": "46172" 00:18:33.344 }, 00:18:33.344 "auth": { 00:18:33.344 "state": "completed", 00:18:33.344 "digest": "sha512", 00:18:33.344 "dhgroup": "ffdhe2048" 00:18:33.344 } 00:18:33.344 } 00:18:33.344 ]' 00:18:33.344 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.601 10:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.601 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.601 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.601 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.601 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.601 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.601 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.859 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:33.859 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.791 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:35.049 10:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.049 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.307 00:18:35.307 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.307 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.307 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.564 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.564 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.564 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.564 10:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.822 { 00:18:35.822 "cntlid": 111, 00:18:35.822 "qid": 0, 00:18:35.822 "state": "enabled", 00:18:35.822 "thread": "nvmf_tgt_poll_group_000", 00:18:35.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:35.822 "listen_address": { 00:18:35.822 "trtype": "TCP", 00:18:35.822 "adrfam": "IPv4", 00:18:35.822 "traddr": "10.0.0.2", 00:18:35.822 "trsvcid": "4420" 00:18:35.822 }, 00:18:35.822 "peer_address": { 00:18:35.822 "trtype": "TCP", 00:18:35.822 "adrfam": "IPv4", 00:18:35.822 "traddr": "10.0.0.1", 00:18:35.822 "trsvcid": "46184" 00:18:35.822 }, 00:18:35.822 "auth": { 00:18:35.822 "state": "completed", 00:18:35.822 "digest": "sha512", 00:18:35.822 "dhgroup": "ffdhe2048" 00:18:35.822 } 00:18:35.822 } 00:18:35.822 ]' 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.822 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.822 10:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.080 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:36.080 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:18:37.012 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.269 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.834 00:18:37.834 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.834 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.834 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.091 { 00:18:38.091 "cntlid": 113, 00:18:38.091 "qid": 0, 00:18:38.091 "state": "enabled", 00:18:38.091 "thread": "nvmf_tgt_poll_group_000", 00:18:38.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:38.091 "listen_address": { 
00:18:38.091 "trtype": "TCP", 00:18:38.091 "adrfam": "IPv4", 00:18:38.091 "traddr": "10.0.0.2", 00:18:38.091 "trsvcid": "4420" 00:18:38.091 }, 00:18:38.091 "peer_address": { 00:18:38.091 "trtype": "TCP", 00:18:38.091 "adrfam": "IPv4", 00:18:38.091 "traddr": "10.0.0.1", 00:18:38.091 "trsvcid": "46208" 00:18:38.091 }, 00:18:38.091 "auth": { 00:18:38.091 "state": "completed", 00:18:38.091 "digest": "sha512", 00:18:38.091 "dhgroup": "ffdhe3072" 00:18:38.091 } 00:18:38.091 } 00:18:38.091 ]' 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.091 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.092 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.092 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.092 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.092 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.349 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:38.349 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.281 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.539 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.103 00:18:40.103 10:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.103 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.103 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.360 { 00:18:40.360 "cntlid": 115, 00:18:40.360 "qid": 0, 00:18:40.360 "state": "enabled", 00:18:40.360 "thread": "nvmf_tgt_poll_group_000", 00:18:40.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:40.360 "listen_address": { 00:18:40.360 "trtype": "TCP", 00:18:40.360 "adrfam": "IPv4", 00:18:40.360 "traddr": "10.0.0.2", 00:18:40.360 "trsvcid": "4420" 00:18:40.360 }, 00:18:40.360 "peer_address": { 00:18:40.360 "trtype": "TCP", 00:18:40.360 "adrfam": "IPv4", 00:18:40.360 "traddr": "10.0.0.1", 00:18:40.360 "trsvcid": "48494" 00:18:40.360 }, 00:18:40.360 "auth": { 00:18:40.360 "state": "completed", 00:18:40.360 "digest": "sha512", 00:18:40.360 "dhgroup": "ffdhe3072" 00:18:40.360 } 00:18:40.360 } 00:18:40.360 ]' 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.360 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.617 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:40.617 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.552 10:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.552 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.810 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.375 00:18:42.375 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.375 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.375 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.633 { 00:18:42.633 "cntlid": 117, 00:18:42.633 "qid": 0, 00:18:42.633 "state": "enabled", 00:18:42.633 "thread": "nvmf_tgt_poll_group_000", 00:18:42.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:42.633 "listen_address": { 00:18:42.633 "trtype": "TCP", 00:18:42.633 "adrfam": "IPv4", 00:18:42.633 "traddr": "10.0.0.2", 00:18:42.633 "trsvcid": "4420" 00:18:42.633 }, 00:18:42.633 "peer_address": { 00:18:42.633 "trtype": "TCP", 00:18:42.633 "adrfam": "IPv4", 00:18:42.633 "traddr": "10.0.0.1", 00:18:42.633 "trsvcid": "48526" 00:18:42.633 }, 00:18:42.633 "auth": { 00:18:42.633 "state": "completed", 00:18:42.633 "digest": "sha512", 00:18:42.633 "dhgroup": "ffdhe3072" 00:18:42.633 } 00:18:42.633 } 00:18:42.633 ]' 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.633 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.890 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:42.891 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:43.824 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:44.082 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:44.340
00:18:44.340 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:44.340 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:44.340 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.598 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.598 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:44.598 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.598 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:44.855 {
00:18:44.855 "cntlid": 119,
00:18:44.855 "qid": 0,
00:18:44.855 "state": "enabled",
00:18:44.855 "thread": "nvmf_tgt_poll_group_000",
00:18:44.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:44.855 "listen_address": {
00:18:44.855 "trtype": "TCP",
00:18:44.855 "adrfam": "IPv4",
00:18:44.855 "traddr": "10.0.0.2",
00:18:44.855 "trsvcid": "4420"
00:18:44.855 },
00:18:44.855 "peer_address": {
00:18:44.855 "trtype": "TCP",
00:18:44.855 "adrfam": "IPv4",
00:18:44.855 "traddr": "10.0.0.1",
00:18:44.855 "trsvcid": "48550"
00:18:44.855 },
00:18:44.855 "auth": {
00:18:44.855 "state": "completed",
00:18:44.855 "digest": "sha512",
00:18:44.855 "dhgroup": "ffdhe3072"
00:18:44.855 }
00:18:44.855 }
00:18:44.855 ]'
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:44.855 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:44.856 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.113 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:18:45.113 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:46.046 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:46.304 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:46.871
00:18:46.871 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:46.871 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:46.871 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:47.129 {
00:18:47.129 "cntlid": 121,
00:18:47.129 "qid": 0,
00:18:47.129 "state": "enabled",
00:18:47.129 "thread": "nvmf_tgt_poll_group_000",
00:18:47.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:47.129 "listen_address": {
00:18:47.129 "trtype": "TCP",
00:18:47.129 "adrfam": "IPv4",
00:18:47.129 "traddr": "10.0.0.2",
00:18:47.129 "trsvcid": "4420"
00:18:47.129 },
00:18:47.129 "peer_address": {
00:18:47.129 "trtype": "TCP",
00:18:47.129 "adrfam": "IPv4",
00:18:47.129 "traddr": "10.0.0.1",
00:18:47.129 "trsvcid": "48576"
00:18:47.129 },
00:18:47.129 "auth": {
00:18:47.129 "state": "completed",
00:18:47.129 "digest": "sha512",
00:18:47.129 "dhgroup": "ffdhe4096"
00:18:47.129 }
00:18:47.129 }
00:18:47.129 ]'
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.129 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.387 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:18:47.387 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:48.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:48.321 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:48.579 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:49.143
00:18:49.143 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:49.143 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:49.143 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:49.401 {
00:18:49.401 "cntlid": 123,
00:18:49.401 "qid": 0,
00:18:49.401 "state": "enabled",
00:18:49.401 "thread": "nvmf_tgt_poll_group_000",
00:18:49.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:49.401 "listen_address": {
00:18:49.401 "trtype": "TCP",
00:18:49.401 "adrfam": "IPv4",
00:18:49.401 "traddr": "10.0.0.2",
00:18:49.401 "trsvcid": "4420"
00:18:49.401 },
00:18:49.401 "peer_address": {
00:18:49.401 "trtype": "TCP",
00:18:49.401 "adrfam": "IPv4",
00:18:49.401 "traddr": "10.0.0.1",
00:18:49.401 "trsvcid": "33660"
00:18:49.401 },
00:18:49.401 "auth": {
00:18:49.401 "state": "completed",
00:18:49.401 "digest": "sha512",
00:18:49.401 "dhgroup": "ffdhe4096"
00:18:49.401 }
00:18:49.401 }
00:18:49.401 ]'
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:49.401 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:49.401 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.401 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.658 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==:
00:18:49.658 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==:
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:50.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:50.599 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:50.857 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:51.421
00:18:51.421 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:51.421 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:51.421 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:51.678 {
00:18:51.678 "cntlid": 125,
00:18:51.678 "qid": 0,
00:18:51.678 "state": "enabled",
00:18:51.678 "thread": "nvmf_tgt_poll_group_000",
00:18:51.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:51.678 "listen_address": {
00:18:51.678 "trtype": "TCP",
00:18:51.678 "adrfam": "IPv4",
00:18:51.678 "traddr": "10.0.0.2",
00:18:51.678 "trsvcid": "4420"
00:18:51.678 },
00:18:51.678 "peer_address": {
00:18:51.678 "trtype": "TCP",
00:18:51.678 "adrfam": "IPv4",
00:18:51.678 "traddr": "10.0.0.1",
00:18:51.678 "trsvcid": "33698"
00:18:51.678 },
00:18:51.678 "auth": {
00:18:51.678 "state": "completed",
00:18:51.678 "digest": "sha512",
00:18:51.678 "dhgroup": "ffdhe4096"
00:18:51.678 }
00:18:51.678 }
00:18:51.678 ]'
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:51.678 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:51.935 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj:
00:18:51.935 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj:
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:52.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:52.866 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.124 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.381 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.381 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:53.381 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:53.381 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:53.638
00:18:53.638 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:53.638 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:53.638 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:53.895 {
00:18:53.895 "cntlid": 127,
00:18:53.895 "qid": 0,
00:18:53.895 "state": "enabled",
00:18:53.895 "thread": "nvmf_tgt_poll_group_000",
00:18:53.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:53.895 "listen_address": {
00:18:53.895 "trtype": "TCP",
00:18:53.895 "adrfam": "IPv4",
00:18:53.895 "traddr": "10.0.0.2",
00:18:53.895 "trsvcid": "4420"
00:18:53.895 },
00:18:53.895 "peer_address": {
00:18:53.895 "trtype": "TCP",
00:18:53.895 "adrfam": "IPv4",
00:18:53.895 "traddr": "10.0.0.1",
00:18:53.895 "trsvcid": "33726"
00:18:53.895 },
00:18:53.895 "auth": {
00:18:53.895 "state": "completed",
00:18:53.895 "digest": "sha512",
00:18:53.895 "dhgroup": "ffdhe4096"
00:18:53.895 }
00:18:53.895 }
00:18:53.895 ]'
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:53.895 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:54.153 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:54.153 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:54.153 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:54.409 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:18:54.409 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:55.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:55.342 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:55.599 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:55.600 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.165
00:18:56.165 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:56.165 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:56.165 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.422 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.422 { 00:18:56.422 "cntlid": 129, 00:18:56.422 "qid": 0, 00:18:56.422 "state": "enabled", 00:18:56.422 "thread": "nvmf_tgt_poll_group_000", 00:18:56.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:56.422 "listen_address": { 00:18:56.422 "trtype": "TCP", 00:18:56.422 "adrfam": "IPv4", 00:18:56.422 "traddr": "10.0.0.2", 00:18:56.422 "trsvcid": "4420" 00:18:56.422 }, 00:18:56.422 "peer_address": { 00:18:56.422 "trtype": "TCP", 00:18:56.422 "adrfam": "IPv4", 00:18:56.422 "traddr": "10.0.0.1", 00:18:56.422 "trsvcid": "33746" 00:18:56.422 }, 00:18:56.422 "auth": { 00:18:56.422 "state": "completed", 00:18:56.422 "digest": "sha512", 00:18:56.422 "dhgroup": "ffdhe6144" 00:18:56.422 } 00:18:56.422 } 00:18:56.422 ]' 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.423 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.681 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:56.681 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=: 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.613 10:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.613 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.871 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.435 00:18:58.435 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.436 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.436 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.718 { 00:18:58.718 "cntlid": 131, 00:18:58.718 "qid": 0, 00:18:58.718 "state": 
"enabled", 00:18:58.718 "thread": "nvmf_tgt_poll_group_000", 00:18:58.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:58.718 "listen_address": { 00:18:58.718 "trtype": "TCP", 00:18:58.718 "adrfam": "IPv4", 00:18:58.718 "traddr": "10.0.0.2", 00:18:58.718 "trsvcid": "4420" 00:18:58.718 }, 00:18:58.718 "peer_address": { 00:18:58.718 "trtype": "TCP", 00:18:58.718 "adrfam": "IPv4", 00:18:58.718 "traddr": "10.0.0.1", 00:18:58.718 "trsvcid": "33772" 00:18:58.718 }, 00:18:58.718 "auth": { 00:18:58.718 "state": "completed", 00:18:58.718 "digest": "sha512", 00:18:58.718 "dhgroup": "ffdhe6144" 00:18:58.718 } 00:18:58.718 } 00:18:58.718 ]' 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.718 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.284 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret 
DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:18:59.284 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:19:00.215 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.215 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.215 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.216 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.472 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.472 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.472 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.472 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.036 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.036 { 00:19:01.036 "cntlid": 133, 00:19:01.036 "qid": 0, 00:19:01.036 "state": "enabled", 00:19:01.036 "thread": "nvmf_tgt_poll_group_000", 00:19:01.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:01.036 "listen_address": { 00:19:01.036 "trtype": "TCP", 00:19:01.036 "adrfam": "IPv4", 00:19:01.036 "traddr": "10.0.0.2", 00:19:01.036 "trsvcid": "4420" 00:19:01.036 }, 00:19:01.036 "peer_address": { 00:19:01.036 "trtype": "TCP", 00:19:01.036 "adrfam": "IPv4", 00:19:01.036 "traddr": "10.0.0.1", 00:19:01.036 "trsvcid": "46064" 00:19:01.036 }, 00:19:01.036 "auth": { 00:19:01.036 "state": "completed", 00:19:01.036 "digest": "sha512", 00:19:01.036 "dhgroup": "ffdhe6144" 00:19:01.036 } 
00:19:01.036 } 00:19:01.036 ]' 00:19:01.036 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.292 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.562 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:19:01.562 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj: 00:19:02.490 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:02.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.490 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.490 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.491 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.491 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.491 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.491 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.491 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.748 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.312 00:19:03.312 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.312 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.312 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.569 { 00:19:03.569 "cntlid": 135, 00:19:03.569 "qid": 0, 00:19:03.569 "state": "enabled", 00:19:03.569 "thread": "nvmf_tgt_poll_group_000", 00:19:03.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:03.569 "listen_address": { 00:19:03.569 "trtype": "TCP", 00:19:03.569 "adrfam": "IPv4", 00:19:03.569 "traddr": "10.0.0.2", 00:19:03.569 "trsvcid": "4420" 00:19:03.569 }, 00:19:03.569 "peer_address": { 00:19:03.569 "trtype": "TCP", 00:19:03.569 "adrfam": "IPv4", 00:19:03.569 "traddr": "10.0.0.1", 00:19:03.569 "trsvcid": "46080" 00:19:03.569 }, 00:19:03.569 "auth": { 00:19:03.569 "state": "completed", 00:19:03.569 "digest": "sha512", 00:19:03.569 "dhgroup": "ffdhe6144" 00:19:03.569 } 00:19:03.569 } 00:19:03.569 ]' 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.569 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.826 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.826 10:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.826 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.083 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:04.083 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.015 10:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.015 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.273 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.206 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.206 { 00:19:06.206 "cntlid": 137, 00:19:06.206 "qid": 0, 00:19:06.206 "state": "enabled", 00:19:06.206 "thread": "nvmf_tgt_poll_group_000", 00:19:06.206 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.206 "listen_address": { 00:19:06.206 "trtype": "TCP", 00:19:06.206 "adrfam": "IPv4", 00:19:06.206 "traddr": "10.0.0.2", 00:19:06.206 "trsvcid": "4420" 00:19:06.206 }, 00:19:06.206 "peer_address": { 00:19:06.206 "trtype": "TCP", 00:19:06.206 "adrfam": "IPv4", 00:19:06.206 "traddr": "10.0.0.1", 00:19:06.206 "trsvcid": "46116" 00:19:06.206 }, 00:19:06.206 "auth": { 00:19:06.206 "state": "completed", 00:19:06.206 "digest": "sha512", 00:19:06.206 "dhgroup": "ffdhe8192" 00:19:06.206 } 00:19:06.206 } 00:19:06.206 ]' 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.206 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.464 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.464 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.464 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.464 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.464 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.722 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret 
DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:19:06.722 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:19:07.653 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.653 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:07.653 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.653 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.654 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.654 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:07.654 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:07.654 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:07.911 10:46:55
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.911 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.843
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:08.843 {
00:19:08.843 "cntlid": 139,
00:19:08.843 "qid": 0,
00:19:08.843 "state": "enabled",
00:19:08.843 "thread": "nvmf_tgt_poll_group_000",
00:19:08.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:08.843 "listen_address": {
00:19:08.843 "trtype": "TCP",
00:19:08.843 "adrfam": "IPv4",
00:19:08.843 "traddr": "10.0.0.2",
00:19:08.843 "trsvcid": "4420"
00:19:08.843 },
00:19:08.843 "peer_address": {
00:19:08.843 "trtype": "TCP",
00:19:08.843 "adrfam": "IPv4",
00:19:08.843 "traddr": "10.0.0.1",
00:19:08.843 "trsvcid": "46140"
00:19:08.843 },
00:19:08.843 "auth": {
00:19:08.843 "state":
"completed", 00:19:08.843 "digest": "sha512", 00:19:08.843 "dhgroup": "ffdhe8192" 00:19:08.843 } 00:19:08.843 } 00:19:08.843 ]' 00:19:08.843 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.101 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.358 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:19:09.358 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: --dhchap-ctrl-secret DHHC-1:02:NzQ1NjVkOWU0ZDU0Njk3MzU4ODBhYmU3YjkxYzdkYzFhZGQxMTc4MDBjNzFlNWRitlY8Tw==: 00:19:10.290 10:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:10.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:10.290 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- #
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.547 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:11.479
00:19:11.479 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:11.479 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:11.479 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:11.737
10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:11.737 {
00:19:11.737 "cntlid": 141,
00:19:11.737 "qid": 0,
00:19:11.737 "state": "enabled",
00:19:11.737 "thread": "nvmf_tgt_poll_group_000",
00:19:11.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:11.737 "listen_address": {
00:19:11.737 "trtype": "TCP",
00:19:11.737 "adrfam": "IPv4",
00:19:11.737 "traddr": "10.0.0.2",
00:19:11.737 "trsvcid": "4420"
00:19:11.737 },
00:19:11.737 "peer_address": {
00:19:11.737 "trtype": "TCP",
00:19:11.737 "adrfam": "IPv4",
00:19:11.737 "traddr": "10.0.0.1",
00:19:11.737 "trsvcid": "46864"
00:19:11.737 },
00:19:11.737 "auth": {
00:19:11.737 "state": "completed",
00:19:11.737 "digest": "sha512",
00:19:11.737 "dhgroup": "ffdhe8192"
00:19:11.737 }
00:19:11.737 }
00:19:11.737 ]'
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:11.737 10:46:59
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:11.737 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:11.995 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj:
00:19:11.995 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:01:MWU1YjE5NzlhYmM4YjA2NTEyYjdkOGE1M2IwOTFkN2KH2Akj:
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:12.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.928
10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:12.928 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.186 10:47:00
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:13.186 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:14.119
00:19:14.119 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:14.119 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:14.119 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:14.377 {
00:19:14.377 "cntlid": 143,
00:19:14.377 "qid": 0,
00:19:14.377 "state": "enabled",
00:19:14.377 "thread": "nvmf_tgt_poll_group_000",
00:19:14.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:14.377 "listen_address": {
00:19:14.377 "trtype": "TCP",
00:19:14.377 "adrfam": "IPv4",
00:19:14.377 "traddr": "10.0.0.2",
00:19:14.377 "trsvcid": "4420"
00:19:14.377 },
00:19:14.377 "peer_address": {
00:19:14.377 "trtype": "TCP",
00:19:14.377 "adrfam": "IPv4",
00:19:14.377 "traddr": "10.0.0.1",
00:19:14.377 "trsvcid": "46892"
00:19:14.377 },
00:19:14.377 "auth": {
00:19:14.377 "state": "completed",
00:19:14.377 "digest": "sha512",
00:19:14.377 "dhgroup": "ffdhe8192"
00:19:14.377 }
00:19:14.377 }
00:19:14.377 ]'
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:14.377 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.942 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:19:14.942 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=:
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:15.874 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:16.132 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:17.065
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:17.065 {
00:19:17.065 "cntlid": 145,
00:19:17.065 "qid": 0,
00:19:17.065 "state": "enabled",
00:19:17.065 "thread": "nvmf_tgt_poll_group_000",
00:19:17.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:17.065 "listen_address": {
00:19:17.065 "trtype": "TCP",
00:19:17.065 "adrfam": "IPv4",
00:19:17.065 "traddr": "10.0.0.2",
00:19:17.065 "trsvcid": "4420"
00:19:17.065 },
00:19:17.065 "peer_address": {
00:19:17.065 "trtype": "TCP",
00:19:17.065 "adrfam": "IPv4",
00:19:17.065 "traddr": "10.0.0.1",
00:19:17.065 "trsvcid": "46930"
00:19:17.065 },
00:19:17.065 "auth": {
00:19:17.065 "state": "completed",
00:19:17.065 "digest": "sha512",
00:19:17.065 "dhgroup": "ffdhe8192"
00:19:17.065 }
00:19:17.065 }
00:19:17.065 ]'
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:17.065 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:17.323 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:17.323 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:17.323 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:17.323 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.323 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:17.580 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:19:17.581 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Mjc5YjcwYTQwY2UzYTFmNzFlZTQzZDk2NTNkM2QyNGUzMDJmZGI1MDc0Mzk1ZDZm/zwVQg==: --dhchap-ctrl-secret DHHC-1:03:M2UzMTRkMmFhYmUyMWY1ZGQxYTM1ZTk4YzFiZTg2YWEyNDI0NjlhN2JhMWU3MzVlN2ZmNmU1MzExMDk0MDYxYROmnt0=:
00:19:18.513 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:18.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:18.513 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:18.513 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@652 -- # local es=0 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:18.514 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:19.447 request: 00:19:19.447 { 00:19:19.447 "name": "nvme0", 00:19:19.447 "trtype": "tcp", 00:19:19.447 "traddr": "10.0.0.2", 00:19:19.447 "adrfam": "ipv4", 00:19:19.447 "trsvcid": "4420", 00:19:19.447 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:19.447 "prchk_reftag": false, 00:19:19.447 "prchk_guard": false, 00:19:19.447 "hdgst": false, 00:19:19.447 "ddgst": 
false, 00:19:19.447 "dhchap_key": "key2", 00:19:19.447 "allow_unrecognized_csi": false, 00:19:19.447 "method": "bdev_nvme_attach_controller", 00:19:19.447 "req_id": 1 00:19:19.447 } 00:19:19.447 Got JSON-RPC error response 00:19:19.447 response: 00:19:19.447 { 00:19:19.447 "code": -5, 00:19:19.447 "message": "Input/output error" 00:19:19.447 } 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.447 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.448 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:20.014 request: 00:19:20.014 { 00:19:20.014 "name": "nvme0", 00:19:20.014 "trtype": "tcp", 00:19:20.014 "traddr": "10.0.0.2", 
00:19:20.014 "adrfam": "ipv4", 00:19:20.014 "trsvcid": "4420", 00:19:20.014 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:20.014 "prchk_reftag": false, 00:19:20.014 "prchk_guard": false, 00:19:20.014 "hdgst": false, 00:19:20.014 "ddgst": false, 00:19:20.014 "dhchap_key": "key1", 00:19:20.014 "dhchap_ctrlr_key": "ckey2", 00:19:20.014 "allow_unrecognized_csi": false, 00:19:20.014 "method": "bdev_nvme_attach_controller", 00:19:20.014 "req_id": 1 00:19:20.014 } 00:19:20.014 Got JSON-RPC error response 00:19:20.014 response: 00:19:20.014 { 00:19:20.014 "code": -5, 00:19:20.014 "message": "Input/output error" 00:19:20.014 } 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.014 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.015 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.948 request: 00:19:20.948 { 00:19:20.948 "name": "nvme0", 00:19:20.948 "trtype": "tcp", 00:19:20.948 "traddr": "10.0.0.2", 00:19:20.948 "adrfam": "ipv4", 00:19:20.948 "trsvcid": "4420", 00:19:20.948 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:20.948 "prchk_reftag": false, 00:19:20.948 "prchk_guard": false, 00:19:20.948 "hdgst": false, 00:19:20.948 "ddgst": false, 00:19:20.948 "dhchap_key": "key1", 00:19:20.948 "dhchap_ctrlr_key": "ckey1", 00:19:20.948 "allow_unrecognized_csi": false, 00:19:20.948 "method": "bdev_nvme_attach_controller", 00:19:20.948 "req_id": 1 00:19:20.948 } 00:19:20.948 Got JSON-RPC error response 00:19:20.948 response: 00:19:20.948 { 00:19:20.948 "code": -5, 00:19:20.948 "message": "Input/output error" 00:19:20.948 } 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.948 
10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1330499 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1330499 ']' 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1330499 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330499 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330499' 00:19:20.948 killing process with pid 1330499 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1330499 00:19:20.948 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1330499 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1353209 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1353209 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1353209 ']' 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.206 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1353209 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1353209 ']' 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.465 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.726 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.726 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:21.726 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:21.726 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.726 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 null0 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vIK 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.uwl ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uwl 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.983 10:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JKf 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Pg1 ]] 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pg1 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cg 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.984 10:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.TPd ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPd 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fzN 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.984 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.356 nvme0n1 00:19:23.356 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.356 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.356 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.614 { 00:19:23.614 "cntlid": 1, 00:19:23.614 "qid": 0, 00:19:23.614 "state": "enabled", 00:19:23.614 "thread": "nvmf_tgt_poll_group_000", 00:19:23.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:23.614 "listen_address": { 00:19:23.614 "trtype": "TCP", 00:19:23.614 "adrfam": "IPv4", 00:19:23.614 "traddr": "10.0.0.2", 00:19:23.614 "trsvcid": "4420" 00:19:23.614 }, 00:19:23.614 "peer_address": { 00:19:23.614 "trtype": "TCP", 00:19:23.614 "adrfam": "IPv4", 00:19:23.614 "traddr": "10.0.0.1", 00:19:23.614 "trsvcid": "33806" 00:19:23.614 }, 00:19:23.614 "auth": { 00:19:23.614 "state": "completed", 00:19:23.614 "digest": "sha512", 00:19:23.614 "dhgroup": "ffdhe8192" 00:19:23.614 } 00:19:23.614 } 00:19:23.614 ]' 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.614 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.874 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:19:23.874 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.874 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.874 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.874 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.173 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:24.173 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:25.130 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.389 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.647 request: 00:19:25.647 { 00:19:25.647 "name": "nvme0", 00:19:25.647 "trtype": "tcp", 00:19:25.647 "traddr": "10.0.0.2", 00:19:25.647 "adrfam": "ipv4", 00:19:25.647 "trsvcid": "4420", 00:19:25.647 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:25.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:25.647 "prchk_reftag": false, 00:19:25.647 "prchk_guard": false, 00:19:25.647 "hdgst": false, 00:19:25.647 "ddgst": false, 00:19:25.647 "dhchap_key": "key3", 00:19:25.647 "allow_unrecognized_csi": false, 00:19:25.647 "method": "bdev_nvme_attach_controller", 00:19:25.647 "req_id": 1 00:19:25.647 } 00:19:25.647 Got JSON-RPC error response 00:19:25.647 response: 00:19:25.647 { 00:19:25.647 "code": -5, 00:19:25.647 "message": "Input/output error" 00:19:25.647 } 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.647 10:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:25.647 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:25.905 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.163 request: 00:19:26.163 { 00:19:26.163 "name": "nvme0", 00:19:26.163 "trtype": "tcp", 00:19:26.163 "traddr": "10.0.0.2", 00:19:26.163 "adrfam": "ipv4", 00:19:26.163 "trsvcid": "4420", 00:19:26.163 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:26.163 "prchk_reftag": false, 00:19:26.163 "prchk_guard": false, 00:19:26.163 "hdgst": false, 00:19:26.163 "ddgst": false, 00:19:26.163 "dhchap_key": "key3", 00:19:26.163 "allow_unrecognized_csi": false, 00:19:26.163 "method": "bdev_nvme_attach_controller", 00:19:26.163 "req_id": 1 00:19:26.163 } 00:19:26.163 Got JSON-RPC error response 00:19:26.163 response: 00:19:26.163 { 00:19:26.163 "code": -5, 00:19:26.163 "message": "Input/output error" 00:19:26.163 } 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.163 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.420 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.985 request: 00:19:26.985 { 00:19:26.985 "name": "nvme0", 00:19:26.985 "trtype": "tcp", 00:19:26.985 "traddr": "10.0.0.2", 00:19:26.985 "adrfam": "ipv4", 00:19:26.985 "trsvcid": "4420", 00:19:26.985 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.985 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:26.986 "prchk_reftag": false, 00:19:26.986 "prchk_guard": false, 00:19:26.986 "hdgst": false, 00:19:26.986 "ddgst": false, 00:19:26.986 "dhchap_key": "key0", 00:19:26.986 "dhchap_ctrlr_key": "key1", 00:19:26.986 "allow_unrecognized_csi": false, 00:19:26.986 "method": "bdev_nvme_attach_controller", 00:19:26.986 "req_id": 1 00:19:26.986 } 00:19:26.986 Got JSON-RPC error response 00:19:26.986 response: 00:19:26.986 { 00:19:26.986 "code": -5, 00:19:26.986 "message": "Input/output error" 00:19:26.986 } 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:26.986 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:27.243 nvme0n1 00:19:27.501 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:19:27.501 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:27.501 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.758 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.758 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.758 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:28.016 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.386 nvme0n1 00:19:29.386 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:29.386 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:29.386 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.644 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:29.901 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.901 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:29.901 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: --dhchap-ctrl-secret DHHC-1:03:MmUyYzY1ODQwMmRlZjJlZDE3MzExZjAzZWE1ZDc1Y2YzYTM2ZThkMTE4ODM0ZDNmYmViMmQ5ZTVlMGMyYzJhM9TBc20=: 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.834 10:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.834 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.770 request: 00:19:31.770 { 00:19:31.770 "name": "nvme0", 00:19:31.770 "trtype": "tcp", 00:19:31.770 "traddr": "10.0.0.2", 00:19:31.770 "adrfam": "ipv4", 00:19:31.770 "trsvcid": "4420", 00:19:31.770 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.770 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:31.770 "prchk_reftag": false, 00:19:31.770 "prchk_guard": false, 00:19:31.770 "hdgst": false, 00:19:31.770 "ddgst": false, 00:19:31.770 "dhchap_key": "key1", 00:19:31.770 "allow_unrecognized_csi": false, 00:19:31.770 "method": "bdev_nvme_attach_controller", 00:19:31.770 "req_id": 1 00:19:31.770 } 00:19:31.770 Got JSON-RPC error response 00:19:31.770 response: 00:19:31.770 { 00:19:31.770 "code": -5, 00:19:31.770 "message": "Input/output error" 00:19:31.770 } 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.770 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.146 nvme0n1 00:19:33.146 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:19:33.146 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:33.146 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.405 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.405 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.405 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:33.663 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:34.228 nvme0n1 00:19:34.228 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:34.228 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:34.228 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.485 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.485 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.485 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: '' 2s 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:34.742 10:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: ]] 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTYxNzViNjM5OGFkOTY5MjRkMzgxMGQ1MjllZWEyNTAXpP3G: 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:34.742 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:34.743 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: 2s 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: ]] 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:MDc4ZmE2YWFlMTY2NmUyODljYTYyOWE1YmFlNmYwYmVlMjEyMmY1MGJiYTYyZjdl9eXslA==: 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:36.641 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.166 10:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.166 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.097 nvme0n1 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.097 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.029 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:41.029 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:41.029 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:41.287 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:41.544 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:41.544 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:41.544 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.801 10:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.801 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:42.735 request: 00:19:42.735 { 00:19:42.735 "name": "nvme0", 00:19:42.735 "dhchap_key": "key1", 00:19:42.735 "dhchap_ctrlr_key": "key3", 00:19:42.735 "method": "bdev_nvme_set_keys", 00:19:42.735 "req_id": 1 00:19:42.735 } 00:19:42.735 Got JSON-RPC error response 00:19:42.735 response: 00:19:42.735 { 00:19:42.735 "code": -13, 00:19:42.735 "message": "Permission denied" 00:19:42.735 } 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:42.735 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.106 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:45.478 nvme0n1 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:45.478 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:46.409 request: 00:19:46.409 { 00:19:46.409 "name": "nvme0", 00:19:46.409 "dhchap_key": "key2", 
00:19:46.409 "dhchap_ctrlr_key": "key0", 00:19:46.409 "method": "bdev_nvme_set_keys", 00:19:46.409 "req_id": 1 00:19:46.409 } 00:19:46.409 Got JSON-RPC error response 00:19:46.409 response: 00:19:46.409 { 00:19:46.409 "code": -13, 00:19:46.409 "message": "Permission denied" 00:19:46.409 } 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:46.409 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.666 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:46.666 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:47.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:47.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:47.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.856 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:47.856 10:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:47.856 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:47.856 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1330528 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1330528 ']' 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1330528 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330528 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330528' 00:19:47.857 killing process with pid 1330528 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1330528 00:19:47.857 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1330528 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:48.422 rmmod nvme_tcp 00:19:48.422 rmmod nvme_fabrics 00:19:48.422 rmmod nvme_keyring 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1353209 ']' 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1353209 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1353209 ']' 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1353209 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1353209 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1353209' 00:19:48.422 killing process with pid 1353209 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1353209 00:19:48.422 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1353209 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.680 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vIK /tmp/spdk.key-sha256.JKf 
/tmp/spdk.key-sha384.4Cg /tmp/spdk.key-sha512.fzN /tmp/spdk.key-sha512.uwl /tmp/spdk.key-sha384.Pg1 /tmp/spdk.key-sha256.TPd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:50.585 00:19:50.585 real 3m30.885s 00:19:50.585 user 8m15.391s 00:19:50.585 sys 0m27.801s 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.585 ************************************ 00:19:50.585 END TEST nvmf_auth_target 00:19:50.585 ************************************ 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.585 ************************************ 00:19:50.585 START TEST nvmf_bdevio_no_huge 00:19:50.585 ************************************ 00:19:50.585 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.843 * Looking for test storage... 
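The auth-target trace above rotates DH-HMAC-CHAP keys through the `bdev_nvme_set_keys` RPC and deliberately provokes a `-13` (`Permission denied`) JSON-RPC error when the host requests a key pair the subsystem was not configured to accept (`nvmf_subsystem_set_keys` on the target side gates which pairs are valid). A minimal sketch of that request/response shape follows; the method and parameter names (`name`, `dhchap_key`, `dhchap_ctrlr_key`) and the `-13` error code are copied verbatim from the request/response pairs printed in the trace, while the JSON-RPC 2.0 envelope and the helper names are assumptions for illustration, not SPDK's actual `rpc.py` internals:

```python
import json

def build_set_keys_request(name, dhchap_key=None, dhchap_ctrlr_key=None, req_id=1):
    # Method and parameter names match the "request:" blocks in the trace;
    # the JSON-RPC 2.0 envelope is an assumption about on-the-wire framing.
    params = {"name": name}
    if dhchap_key is not None:
        params["dhchap_key"] = dhchap_key
    if dhchap_ctrlr_key is not None:
        params["dhchap_ctrlr_key"] = dhchap_ctrlr_key
    return {"jsonrpc": "2.0", "method": "bdev_nvme_set_keys",
            "params": params, "id": req_id}

def is_permission_denied(response):
    # The trace shows code -13 ("Permission denied") when the requested
    # key pair is outside what the subsystem allows for this host.
    return response.get("error", {}).get("code") == -13

# The rejected rotation from the trace: key1/key3 against a key2/key3 config.
req = build_set_keys_request("nvme0", "key1", "key3")
payload = json.dumps(req)  # what a client would write to /var/tmp/host.sock
denied = is_permission_denied({"error": {"code": -13,
                                         "message": "Permission denied"}})
```

The test script treats the `-13` response as the expected outcome (via its `NOT` wrapper), then polls `bdev_nvme_get_controllers` until the controller count drops to zero before reconfiguring the keys and reconnecting.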
00:19:50.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:50.843 10:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:50.843 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.844 10:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:50.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.844 --rc genhtml_branch_coverage=1 00:19:50.844 --rc genhtml_function_coverage=1 00:19:50.844 --rc genhtml_legend=1 00:19:50.844 --rc geninfo_all_blocks=1 00:19:50.844 --rc geninfo_unexecuted_blocks=1 00:19:50.844 00:19:50.844 ' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:50.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.844 --rc genhtml_branch_coverage=1 00:19:50.844 --rc genhtml_function_coverage=1 00:19:50.844 --rc genhtml_legend=1 00:19:50.844 --rc geninfo_all_blocks=1 00:19:50.844 --rc geninfo_unexecuted_blocks=1 00:19:50.844 00:19:50.844 ' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:50.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.844 --rc genhtml_branch_coverage=1 00:19:50.844 --rc genhtml_function_coverage=1 00:19:50.844 --rc genhtml_legend=1 00:19:50.844 --rc geninfo_all_blocks=1 00:19:50.844 --rc geninfo_unexecuted_blocks=1 00:19:50.844 00:19:50.844 ' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:50.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.844 --rc genhtml_branch_coverage=1 00:19:50.844 --rc genhtml_function_coverage=1 00:19:50.844 --rc genhtml_legend=1 00:19:50.844 --rc geninfo_all_blocks=1 00:19:50.844 --rc geninfo_unexecuted_blocks=1 00:19:50.844 00:19:50.844 ' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:50.844 
10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.844 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.845 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.440 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:19:53.441 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:53.441 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:53.441 Found net devices under 0000:09:00.0: cvl_0_0 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.441 
10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:53.441 Found net devices under 0000:09:00.1: cvl_0_1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:53.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:19:53.441 00:19:53.441 --- 10.0.0.2 ping statistics --- 00:19:53.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.441 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:19:53.441 00:19:53.441 --- 10.0.0.1 ping statistics --- 00:19:53.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.441 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:53.441 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1358467 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1358467 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1358467 ']' 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.442 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.442 [2024-11-19 10:47:40.741501] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:19:53.442 [2024-11-19 10:47:40.741585] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:53.442 [2024-11-19 10:47:40.830039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.442 [2024-11-19 10:47:40.890702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.442 [2024-11-19 10:47:40.890764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.442 [2024-11-19 10:47:40.890794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.442 [2024-11-19 10:47:40.890806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.442 [2024-11-19 10:47:40.890815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.442 [2024-11-19 10:47:40.891871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.442 [2024-11-19 10:47:40.891934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:53.442 [2024-11-19 10:47:40.892002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:53.442 [2024-11-19 10:47:40.892005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.442 [2024-11-19 10:47:41.044051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.442 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.442 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 Malloc0 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.699 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 [2024-11-19 10:47:41.081677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.699 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.700 { 00:19:53.700 "params": { 00:19:53.700 "name": "Nvme$subsystem", 00:19:53.700 "trtype": "$TEST_TRANSPORT", 00:19:53.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.700 "adrfam": "ipv4", 00:19:53.700 "trsvcid": "$NVMF_PORT", 00:19:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.700 "hdgst": ${hdgst:-false}, 00:19:53.700 "ddgst": ${ddgst:-false} 00:19:53.700 }, 00:19:53.700 "method": "bdev_nvme_attach_controller" 00:19:53.700 } 00:19:53.700 EOF 00:19:53.700 )") 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:53.700 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:53.700 "params": { 00:19:53.700 "name": "Nvme1", 00:19:53.700 "trtype": "tcp", 00:19:53.700 "traddr": "10.0.0.2", 00:19:53.700 "adrfam": "ipv4", 00:19:53.700 "trsvcid": "4420", 00:19:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.700 "hdgst": false, 00:19:53.700 "ddgst": false 00:19:53.700 }, 00:19:53.700 "method": "bdev_nvme_attach_controller" 00:19:53.700 }' 00:19:53.700 [2024-11-19 10:47:41.130902] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:53.700 [2024-11-19 10:47:41.130981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1358501 ] 00:19:53.700 [2024-11-19 10:47:41.204090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.700 [2024-11-19 10:47:41.267708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.700 [2024-11-19 10:47:41.267759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.700 [2024-11-19 10:47:41.267762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.957 I/O targets: 00:19:53.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:53.957 00:19:53.957 00:19:53.957 CUnit - A unit testing framework for C - Version 2.1-3 00:19:53.957 http://cunit.sourceforge.net/ 00:19:53.957 00:19:53.957 00:19:53.957 Suite: bdevio tests on: Nvme1n1 00:19:53.957 Test: blockdev write read block ...passed 00:19:54.215 Test: blockdev write zeroes read block ...passed 00:19:54.215 Test: blockdev write zeroes read no split ...passed 00:19:54.215 Test: blockdev write zeroes 
read split ...passed 00:19:54.215 Test: blockdev write zeroes read split partial ...passed 00:19:54.215 Test: blockdev reset ...[2024-11-19 10:47:41.660145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:54.215 [2024-11-19 10:47:41.660252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d6e0 (9): Bad file descriptor 00:19:54.215 [2024-11-19 10:47:41.676421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:54.215 passed 00:19:54.215 Test: blockdev write read 8 blocks ...passed 00:19:54.215 Test: blockdev write read size > 128k ...passed 00:19:54.215 Test: blockdev write read invalid size ...passed 00:19:54.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:54.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:54.215 Test: blockdev write read max offset ...passed 00:19:54.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:54.473 Test: blockdev writev readv 8 blocks ...passed 00:19:54.473 Test: blockdev writev readv 30 x 1block ...passed 00:19:54.473 Test: blockdev writev readv block ...passed 00:19:54.473 Test: blockdev writev readv size > 128k ...passed 00:19:54.473 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:54.473 Test: blockdev comparev and writev ...[2024-11-19 10:47:41.891452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.891491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.891516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 
10:47:41.891533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.891859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.891883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.891905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.891920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.892251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.892275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.892296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.892321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.892652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.892676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.892697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.473 [2024-11-19 10:47:41.892713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:54.473 passed 00:19:54.473 Test: blockdev nvme passthru rw ...passed 00:19:54.473 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:47:41.976566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.473 [2024-11-19 10:47:41.976594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.976736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.473 [2024-11-19 10:47:41.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.976890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.473 [2024-11-19 10:47:41.976913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:54.473 [2024-11-19 10:47:41.977042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.473 [2024-11-19 10:47:41.977066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:54.473 passed 00:19:54.473 Test: blockdev nvme admin passthru ...passed 00:19:54.473 Test: blockdev copy ...passed 00:19:54.473 00:19:54.473 Run Summary: Type Total Ran Passed Failed Inactive 00:19:54.473 suites 1 1 n/a 0 0 00:19:54.473 tests 23 23 23 0 0 00:19:54.473 asserts 152 152 152 0 n/a 00:19:54.473 00:19:54.473 Elapsed time = 1.066 seconds 
00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:55.040 rmmod nvme_tcp 00:19:55.040 rmmod nvme_fabrics 00:19:55.040 rmmod nvme_keyring 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1358467 ']' 00:19:55.040 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1358467 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1358467 ']' 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1358467 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1358467 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1358467' 00:19:55.040 killing process with pid 1358467 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1358467 00:19:55.040 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1358467 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:55.298 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:55.298 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:55.299 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:55.299 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:55.299 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.299 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.299 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:57.834 00:19:57.834 real 0m6.699s 00:19:57.834 user 0m10.512s 00:19:57.834 sys 0m2.723s 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.834 ************************************ 00:19:57.834 END TEST nvmf_bdevio_no_huge 00:19:57.834 ************************************ 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.834 
************************************ 00:19:57.834 START TEST nvmf_tls 00:19:57.834 ************************************ 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.834 * Looking for test storage... 00:19:57.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.834 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.834 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.834 --rc genhtml_branch_coverage=1 00:19:57.835 --rc genhtml_function_coverage=1 00:19:57.835 --rc genhtml_legend=1 00:19:57.835 --rc geninfo_all_blocks=1 00:19:57.835 --rc geninfo_unexecuted_blocks=1 00:19:57.835 00:19:57.835 ' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.835 --rc genhtml_branch_coverage=1 00:19:57.835 --rc genhtml_function_coverage=1 00:19:57.835 --rc genhtml_legend=1 00:19:57.835 --rc geninfo_all_blocks=1 00:19:57.835 --rc geninfo_unexecuted_blocks=1 00:19:57.835 00:19:57.835 ' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.835 --rc genhtml_branch_coverage=1 00:19:57.835 --rc genhtml_function_coverage=1 00:19:57.835 --rc genhtml_legend=1 00:19:57.835 --rc geninfo_all_blocks=1 00:19:57.835 --rc geninfo_unexecuted_blocks=1 00:19:57.835 00:19:57.835 ' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.835 --rc genhtml_branch_coverage=1 00:19:57.835 --rc genhtml_function_coverage=1 00:19:57.835 --rc genhtml_legend=1 00:19:57.835 --rc geninfo_all_blocks=1 00:19:57.835 --rc geninfo_unexecuted_blocks=1 00:19:57.835 00:19:57.835 ' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.835 
10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:57.835 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.737 10:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:59.737 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:59.737 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.737 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.738 10:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:59.738 Found net devices under 0000:09:00.0: cvl_0_0 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:59.738 Found net devices under 0000:09:00.1: cvl_0_1 00:19:59.738 10:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.738 
10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.738 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:19:59.996 00:19:59.996 --- 10.0.0.2 ping statistics --- 00:19:59.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.996 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:19:59.996 00:19:59.996 --- 10.0.0.1 ping statistics --- 00:19:59.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.996 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1360706 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1360706 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1360706 ']' 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.996 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.996 [2024-11-19 10:47:47.486778] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:59.996 [2024-11-19 10:47:47.486861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.996 [2024-11-19 10:47:47.559699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.996 [2024-11-19 10:47:47.616017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.996 [2024-11-19 10:47:47.616083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:59.996 [2024-11-19 10:47:47.616098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.996 [2024-11-19 10:47:47.616125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.996 [2024-11-19 10:47:47.616136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.996 [2024-11-19 10:47:47.616776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:00.254 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:00.511 true 00:20:00.511 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.511 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:00.770 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:00.770 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:00.770 
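The trace above repeatedly sets an ssl sock-impl option via `sock_impl_set_options`, reads it back with `sock_impl_get_options | jq -r`, and compares against the expected value. That verify step can be sketched standalone; the JSON literal below is a stand-in for real `rpc.py` output (its exact field set is an assumption), and the backslash-escaped right-hand sides mirror the `[[ 13 != \1\3 ]]` idiom in tls.sh:

```shell
# Verify-after-set pattern from target/tls.sh: extract an option with jq,
# fail if it does not match. The JSON is a mock of sock_impl_get_options output.
opts='{"impl_name": "ssl", "tls_version": 13, "enable_ktls": false}'

version=$(echo "$opts" | jq -r .tls_version)
ktls=$(echo "$opts" | jq -r .enable_ktls)

# Backslash-escaping the comparison value makes bash treat it as a literal
# string rather than a glob pattern inside [[ ... ]].
[[ $version != \1\3 ]] && { echo "tls_version mismatch: $version"; exit 1; }
[[ $ktls != \f\a\l\s\e ]] && { echo "enable_ktls mismatch: $ktls"; exit 1; }
echo "tls_version=$version ktls=$ktls"
```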
10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:01.037 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.037 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:01.295 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:01.295 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:01.295 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:01.552 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.552 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:01.809 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:01.809 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:01.809 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.809 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:02.066 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:02.066 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:02.066 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:02.323 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.323 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:02.888 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:02.888 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:02.888 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:02.888 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.888 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:03.453 10:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.kA2XIOh4gz 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.nVpHIgwnoG 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kA2XIOh4gz 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.nVpHIgwnoG 00:20:03.453 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:03.711 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:03.969 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.kA2XIOh4gz 00:20:03.969 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kA2XIOh4gz 00:20:03.969 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.227 [2024-11-19 10:47:51.770069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.227 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.484 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.742 [2024-11-19 10:47:52.303534] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.742 [2024-11-19 10:47:52.303782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.742 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.308 malloc0 00:20:05.308 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.308 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kA2XIOh4gz 00:20:05.873 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.873 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kA2XIOh4gz 00:20:18.064 Initializing NVMe Controllers 00:20:18.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.064 Initialization complete. Launching workers. 
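The two interchange keys registered above come out of nvmf/common.sh's `format_key` helper (the `python -` step at common.sh@733): a `NVMeTLSkey-1` prefix, a two-digit hash field, base64 over the configured key string plus a 4-byte CRC32, and a trailing colon. A standalone sketch follows; the little-endian CRC byte order is an assumption not verified against the helper's source, while the treatment of the hex key as literal ASCII bytes is inferred from decoding the base64 in the trace:

```shell
# Rebuild the first interchange PSK from the trace: NVMeTLSkey-1:01:<b64>:
# where <b64> = base64(key-string + crc32(key-string)).
# Assumption: little-endian CRC byte order.
psk=$(python3 -c '
import base64, zlib
key = b"00112233445566778899aabbccddeeff"      # hex key used as literal bytes
raw = key + zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:01:" + base64.b64encode(raw).decode() + ":")
')
echo "$psk"
```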
00:20:18.064 ======================================================== 00:20:18.064 Latency(us) 00:20:18.064 Device Information : IOPS MiB/s Average min max 00:20:18.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8581.79 33.52 7459.64 1078.40 8899.68 00:20:18.064 ======================================================== 00:20:18.064 Total : 8581.79 33.52 7459.64 1078.40 8899.68 00:20:18.064 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kA2XIOh4gz 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kA2XIOh4gz 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1362721 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1362721 /var/tmp/bdevperf.sock 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1362721 ']' 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
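As a sanity check, the MiB/s column in the perf summary above follows directly from IOPS and the 4096-byte I/O size (`-o 4096`), i.e. IOPS / 256:

```shell
# MiB/s = IOPS * io_size / 2^20; with 4 KiB I/Os this is IOPS / 256.
awk 'BEGIN {
    iops = 8581.79; io_size = 4096
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints 33.52 MiB/s, matching the spdk_nvme_perf table
```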
00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.064 [2024-11-19 10:48:03.618108] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:18.064 [2024-11-19 10:48:03.618189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362721 ] 00:20:18.064 [2024-11-19 10:48:03.682426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.064 [2024-11-19 10:48:03.739677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.064 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kA2XIOh4gz 00:20:18.064 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:18.064 [2024-11-19 10:48:04.390536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.064 TLSTESTn1 00:20:18.065 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:18.065 Running I/O for 10 seconds... 00:20:18.998 3524.00 IOPS, 13.77 MiB/s [2024-11-19T09:48:07.992Z] 3561.00 IOPS, 13.91 MiB/s [2024-11-19T09:48:08.923Z] 3581.67 IOPS, 13.99 MiB/s [2024-11-19T09:48:09.855Z] 3577.50 IOPS, 13.97 MiB/s [2024-11-19T09:48:10.787Z] 3570.20 IOPS, 13.95 MiB/s [2024-11-19T09:48:11.718Z] 3582.17 IOPS, 13.99 MiB/s [2024-11-19T09:48:12.651Z] 3601.71 IOPS, 14.07 MiB/s [2024-11-19T09:48:14.022Z] 3597.25 IOPS, 14.05 MiB/s [2024-11-19T09:48:14.956Z] 3602.22 IOPS, 14.07 MiB/s [2024-11-19T09:48:14.956Z] 3611.50 IOPS, 14.11 MiB/s 00:20:27.333 Latency(us) 00:20:27.333 [2024-11-19T09:48:14.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.333 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:27.333 Verification LBA range: start 0x0 length 0x2000 00:20:27.333 TLSTESTn1 : 10.02 3617.36 14.13 0.00 0.00 35327.59 7087.60 30292.20 00:20:27.333 [2024-11-19T09:48:14.956Z] =================================================================================================================== 00:20:27.333 [2024-11-19T09:48:14.956Z] Total : 3617.36 14.13 0.00 0.00 35327.59 7087.60 30292.20 00:20:27.333 { 00:20:27.333 "results": [ 00:20:27.333 { 00:20:27.333 "job": "TLSTESTn1", 00:20:27.333 "core_mask": "0x4", 00:20:27.333 "workload": "verify", 00:20:27.333 "status": "finished", 00:20:27.333 "verify_range": { 00:20:27.333 "start": 0, 00:20:27.333 "length": 8192 00:20:27.333 }, 00:20:27.333 "queue_depth": 128, 00:20:27.333 "io_size": 4096, 00:20:27.333 "runtime": 10.019173, 00:20:27.333 "iops": 
3617.3644271837607, 00:20:27.333 "mibps": 14.130329793686565, 00:20:27.333 "io_failed": 0, 00:20:27.333 "io_timeout": 0, 00:20:27.333 "avg_latency_us": 35327.58550729081, 00:20:27.333 "min_latency_us": 7087.597037037037, 00:20:27.333 "max_latency_us": 30292.195555555554 00:20:27.333 } 00:20:27.333 ], 00:20:27.333 "core_count": 1 00:20:27.333 } 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1362721 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1362721 ']' 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1362721 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1362721 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1362721' 00:20:27.333 killing process with pid 1362721 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1362721 00:20:27.333 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.333 00:20:27.333 Latency(us) 00:20:27.333 [2024-11-19T09:48:14.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.333 [2024-11-19T09:48:14.956Z] 
=================================================================================================================== 00:20:27.333 [2024-11-19T09:48:14.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1362721 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nVpHIgwnoG 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nVpHIgwnoG 00:20:27.333 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nVpHIgwnoG 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nVpHIgwnoG 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1364543 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1364543 /var/tmp/bdevperf.sock 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1364543 ']' 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.334 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.593 [2024-11-19 10:48:14.962490] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:27.593 [2024-11-19 10:48:14.962576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364543 ] 00:20:27.593 [2024-11-19 10:48:15.029100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.593 [2024-11-19 10:48:15.087337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.593 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.593 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.593 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nVpHIgwnoG 00:20:27.853 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.145 [2024-11-19 10:48:15.709028] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.145 [2024-11-19 10:48:15.720064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:28.145 [2024-11-19 10:48:15.720272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2f2c0 (107): Transport endpoint is not connected 00:20:28.145 [2024-11-19 10:48:15.721262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2f2c0 (9): Bad file descriptor 00:20:28.145 [2024-11-19 
10:48:15.722262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:28.145 [2024-11-19 10:48:15.722296] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:28.145 [2024-11-19 10:48:15.722320] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:28.145 [2024-11-19 10:48:15.722340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:28.145 request: 00:20:28.145 { 00:20:28.145 "name": "TLSTEST", 00:20:28.145 "trtype": "tcp", 00:20:28.145 "traddr": "10.0.0.2", 00:20:28.145 "adrfam": "ipv4", 00:20:28.145 "trsvcid": "4420", 00:20:28.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.145 "prchk_reftag": false, 00:20:28.145 "prchk_guard": false, 00:20:28.145 "hdgst": false, 00:20:28.145 "ddgst": false, 00:20:28.145 "psk": "key0", 00:20:28.145 "allow_unrecognized_csi": false, 00:20:28.145 "method": "bdev_nvme_attach_controller", 00:20:28.145 "req_id": 1 00:20:28.145 } 00:20:28.145 Got JSON-RPC error response 00:20:28.145 response: 00:20:28.145 { 00:20:28.145 "code": -5, 00:20:28.145 "message": "Input/output error" 00:20:28.145 } 00:20:28.145 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1364543 00:20:28.145 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1364543 ']' 00:20:28.145 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1364543 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364543 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364543' 00:20:28.426 killing process with pid 1364543 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1364543 00:20:28.426 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.426 00:20:28.426 Latency(us) 00:20:28.426 [2024-11-19T09:48:16.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.426 [2024-11-19T09:48:16.049Z] =================================================================================================================== 00:20:28.426 [2024-11-19T09:48:16.049Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1364543 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:28.426 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.426 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.426 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kA2XIOh4gz 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
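The `NOT run_bdevperf ...` calls in the trace are the negative tests: the `NOT` helper from common/autotest_common.sh (the `local es=0` / `(( !es == 0 ))` lines at @652-679) runs a command and succeeds only if it failed. A simplified sketch of that pattern; the real helper also special-cases core-dump signals after masking the high bit, which is elided here:

```shell
# Succeed only when the wrapped command fails (simplified from
# autotest_common.sh's NOT; core-signal handling omitted).
NOT() {
    local es=0
    "$@" || es=$?
    # signal deaths report 128+N; fold the high bit off before inverting
    (( es > 128 )) && es=$(( es & ~128 ))
    (( !es == 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```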
00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kA2XIOh4gz 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kA2XIOh4gz 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kA2XIOh4gz 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1364684 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1364684 
/var/tmp/bdevperf.sock 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1364684 ']' 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.427 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.685 [2024-11-19 10:48:16.054240] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:28.685 [2024-11-19 10:48:16.054365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364684 ] 00:20:28.685 [2024-11-19 10:48:16.119235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.685 [2024-11-19 10:48:16.174062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kA2XIOh4gz 00:20:28.942 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:29.200 [2024-11-19 10:48:16.800349] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.200 [2024-11-19 10:48:16.806014] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:29.200 [2024-11-19 10:48:16.806048] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:29.200 [2024-11-19 10:48:16.806103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:29.200 [2024-11-19 10:48:16.806583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1e2c0 (107): Transport endpoint is not connected 00:20:29.200 [2024-11-19 10:48:16.807571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1e2c0 (9): Bad file descriptor 00:20:29.200 [2024-11-19 10:48:16.808570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:29.200 [2024-11-19 10:48:16.808594] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:29.200 [2024-11-19 10:48:16.808622] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:29.200 [2024-11-19 10:48:16.808640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:29.200 request: 00:20:29.200 { 00:20:29.200 "name": "TLSTEST", 00:20:29.200 "trtype": "tcp", 00:20:29.200 "traddr": "10.0.0.2", 00:20:29.200 "adrfam": "ipv4", 00:20:29.200 "trsvcid": "4420", 00:20:29.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.200 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.200 "prchk_reftag": false, 00:20:29.200 "prchk_guard": false, 00:20:29.200 "hdgst": false, 00:20:29.200 "ddgst": false, 00:20:29.200 "psk": "key0", 00:20:29.200 "allow_unrecognized_csi": false, 00:20:29.200 "method": "bdev_nvme_attach_controller", 00:20:29.200 "req_id": 1 00:20:29.200 } 00:20:29.200 Got JSON-RPC error response 00:20:29.200 response: 00:20:29.200 { 00:20:29.200 "code": -5, 00:20:29.200 "message": "Input/output error" 00:20:29.200 } 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1364684 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1364684 ']' 00:20:29.458 10:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1364684 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364684 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364684' 00:20:29.458 killing process with pid 1364684 00:20:29.458 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1364684 00:20:29.458 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.458 00:20:29.458 Latency(us) 00:20:29.458 [2024-11-19T09:48:17.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.458 [2024-11-19T09:48:17.081Z] =================================================================================================================== 00:20:29.458 [2024-11-19T09:48:17.082Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.459 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1364684 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.459 10:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kA2XIOh4gz 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kA2XIOh4gz 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kA2XIOh4gz 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kA2XIOh4gz 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1364825 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1364825 /var/tmp/bdevperf.sock 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1364825 ']' 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.459 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.717 [2024-11-19 10:48:17.113450] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:29.717 [2024-11-19 10:48:17.113534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364825 ] 00:20:29.717 [2024-11-19 10:48:17.179123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.717 [2024-11-19 10:48:17.235795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.974 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.974 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.974 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kA2XIOh4gz 00:20:30.231 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.490 [2024-11-19 10:48:17.858238] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.490 [2024-11-19 10:48:17.865507] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:30.490 [2024-11-19 10:48:17.865540] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:30.490 [2024-11-19 10:48:17.865594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:30.490 [2024-11-19 10:48:17.866249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8922c0 (107): Transport endpoint is not connected 00:20:30.490 [2024-11-19 10:48:17.867238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8922c0 (9): Bad file descriptor 00:20:30.490 [2024-11-19 10:48:17.868238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:30.490 [2024-11-19 10:48:17.868258] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:30.490 [2024-11-19 10:48:17.868286] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:30.490 [2024-11-19 10:48:17.868310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:30.490 request: 00:20:30.490 { 00:20:30.490 "name": "TLSTEST", 00:20:30.490 "trtype": "tcp", 00:20:30.490 "traddr": "10.0.0.2", 00:20:30.490 "adrfam": "ipv4", 00:20:30.490 "trsvcid": "4420", 00:20:30.490 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.490 "prchk_reftag": false, 00:20:30.490 "prchk_guard": false, 00:20:30.490 "hdgst": false, 00:20:30.490 "ddgst": false, 00:20:30.490 "psk": "key0", 00:20:30.490 "allow_unrecognized_csi": false, 00:20:30.490 "method": "bdev_nvme_attach_controller", 00:20:30.490 "req_id": 1 00:20:30.490 } 00:20:30.490 Got JSON-RPC error response 00:20:30.490 response: 00:20:30.490 { 00:20:30.490 "code": -5, 00:20:30.490 "message": "Input/output error" 00:20:30.490 } 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1364825 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1364825 ']' 00:20:30.490 10:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1364825 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364825 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364825' 00:20:30.490 killing process with pid 1364825 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1364825 00:20:30.490 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.490 00:20:30.490 Latency(us) 00:20:30.490 [2024-11-19T09:48:18.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.490 [2024-11-19T09:48:18.113Z] =================================================================================================================== 00:20:30.490 [2024-11-19T09:48:18.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.490 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1364825 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.748 10:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1364967 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1364967 /var/tmp/bdevperf.sock 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1364967 ']' 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.748 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.749 [2024-11-19 10:48:18.180091] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:30.749 [2024-11-19 10:48:18.180173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364967 ] 00:20:30.749 [2024-11-19 10:48:18.246950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.749 [2024-11-19 10:48:18.305251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.006 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.006 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:31.006 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:31.263 [2024-11-19 10:48:18.670138] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:31.263 [2024-11-19 10:48:18.670180] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:31.263 request: 00:20:31.263 { 00:20:31.263 "name": "key0", 00:20:31.263 "path": "", 00:20:31.263 "method": "keyring_file_add_key", 00:20:31.263 "req_id": 1 00:20:31.263 } 00:20:31.263 Got JSON-RPC error response 00:20:31.263 response: 00:20:31.263 { 00:20:31.263 "code": -1, 00:20:31.263 "message": "Operation not permitted" 00:20:31.263 } 00:20:31.263 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:31.521 [2024-11-19 10:48:18.938974] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:31.521 [2024-11-19 10:48:18.939029] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:31.521 request: 00:20:31.521 { 00:20:31.521 "name": "TLSTEST", 00:20:31.521 "trtype": "tcp", 00:20:31.521 "traddr": "10.0.0.2", 00:20:31.521 "adrfam": "ipv4", 00:20:31.521 "trsvcid": "4420", 00:20:31.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.521 "prchk_reftag": false, 00:20:31.521 "prchk_guard": false, 00:20:31.521 "hdgst": false, 00:20:31.521 "ddgst": false, 00:20:31.521 "psk": "key0", 00:20:31.521 "allow_unrecognized_csi": false, 00:20:31.521 "method": "bdev_nvme_attach_controller", 00:20:31.521 "req_id": 1 00:20:31.521 } 00:20:31.521 Got JSON-RPC error response 00:20:31.521 response: 00:20:31.521 { 00:20:31.521 "code": -126, 00:20:31.521 "message": "Required key not available" 00:20:31.521 } 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1364967 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1364967 ']' 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1364967 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364967 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:31.521 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:31.521 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364967' 00:20:31.521 killing process with pid 1364967 
00:20:31.521 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1364967 00:20:31.521 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.521 00:20:31.521 Latency(us) 00:20:31.521 [2024-11-19T09:48:19.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.521 [2024-11-19T09:48:19.144Z] =================================================================================================================== 00:20:31.521 [2024-11-19T09:48:19.144Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.521 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1364967 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1360706 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1360706 ']' 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1360706 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1360706 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1360706' 00:20:31.778 killing process with pid 1360706 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1360706 00:20:31.778 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1360706 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.038 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.QihNeMcnYO 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.039 10:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.QihNeMcnYO 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1365120 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1365120 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1365120 ']' 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.039 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.039 [2024-11-19 10:48:19.594320] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:32.039 [2024-11-19 10:48:19.594425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.297 [2024-11-19 10:48:19.666418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.297 [2024-11-19 10:48:19.723438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.297 [2024-11-19 10:48:19.723485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.297 [2024-11-19 10:48:19.723500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.297 [2024-11-19 10:48:19.723513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.297 [2024-11-19 10:48:19.723523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.297 [2024-11-19 10:48:19.724087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QihNeMcnYO 00:20:32.297 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.555 [2024-11-19 10:48:20.102959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.555 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.813 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.070 [2024-11-19 10:48:20.652474] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.070 [2024-11-19 10:48:20.652715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:33.070 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:33.328 malloc0 00:20:33.328 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:33.585 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QihNeMcnYO 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QihNeMcnYO 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1365410 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.150 10:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1365410 /var/tmp/bdevperf.sock 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1365410 ']' 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.150 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.408 [2024-11-19 10:48:21.780077] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:34.408 [2024-11-19 10:48:21.780150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365410 ] 00:20:34.408 [2024-11-19 10:48:21.845063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.408 [2024-11-19 10:48:21.902368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.408 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.408 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.408 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:34.973 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.973 [2024-11-19 10:48:22.554565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.231 TLSTESTn1 00:20:35.231 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.231 Running I/O for 10 seconds... 
00:20:37.534 3020.00 IOPS, 11.80 MiB/s [2024-11-19T09:48:26.089Z] 3141.00 IOPS, 12.27 MiB/s [2024-11-19T09:48:27.021Z] 3172.67 IOPS, 12.39 MiB/s [2024-11-19T09:48:27.954Z] 3192.25 IOPS, 12.47 MiB/s [2024-11-19T09:48:28.886Z] 3203.60 IOPS, 12.51 MiB/s [2024-11-19T09:48:29.818Z] 3215.33 IOPS, 12.56 MiB/s [2024-11-19T09:48:31.191Z] 3221.00 IOPS, 12.58 MiB/s [2024-11-19T09:48:32.122Z] 3225.12 IOPS, 12.60 MiB/s [2024-11-19T09:48:33.054Z] 3227.22 IOPS, 12.61 MiB/s [2024-11-19T09:48:33.054Z] 3226.60 IOPS, 12.60 MiB/s 00:20:45.431 Latency(us) 00:20:45.431 [2024-11-19T09:48:33.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:45.431 Verification LBA range: start 0x0 length 0x2000 00:20:45.431 TLSTESTn1 : 10.02 3232.36 12.63 0.00 0.00 39532.43 9126.49 37282.70 00:20:45.431 [2024-11-19T09:48:33.054Z] =================================================================================================================== 00:20:45.431 [2024-11-19T09:48:33.054Z] Total : 3232.36 12.63 0.00 0.00 39532.43 9126.49 37282.70 00:20:45.431 { 00:20:45.431 "results": [ 00:20:45.431 { 00:20:45.431 "job": "TLSTESTn1", 00:20:45.431 "core_mask": "0x4", 00:20:45.431 "workload": "verify", 00:20:45.431 "status": "finished", 00:20:45.431 "verify_range": { 00:20:45.431 "start": 0, 00:20:45.431 "length": 8192 00:20:45.431 }, 00:20:45.431 "queue_depth": 128, 00:20:45.431 "io_size": 4096, 00:20:45.431 "runtime": 10.021481, 00:20:45.431 "iops": 3232.3565748415826, 00:20:45.431 "mibps": 12.626392870474932, 00:20:45.431 "io_failed": 0, 00:20:45.431 "io_timeout": 0, 00:20:45.431 "avg_latency_us": 39532.42754458839, 00:20:45.431 "min_latency_us": 9126.494814814814, 00:20:45.431 "max_latency_us": 37282.70222222222 00:20:45.431 } 00:20:45.431 ], 00:20:45.431 "core_count": 1 00:20:45.431 } 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1365410 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1365410 ']' 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1365410 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1365410 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1365410' 00:20:45.431 killing process with pid 1365410 00:20:45.431 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1365410 00:20:45.431 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.431 00:20:45.431 Latency(us) 00:20:45.431 [2024-11-19T09:48:33.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.431 [2024-11-19T09:48:33.054Z] =================================================================================================================== 00:20:45.431 [2024-11-19T09:48:33.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.432 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1365410 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.QihNeMcnYO 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QihNeMcnYO 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QihNeMcnYO 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QihNeMcnYO 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QihNeMcnYO 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1366732 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.690 
10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1366732 /var/tmp/bdevperf.sock 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1366732 ']' 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.690 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.690 [2024-11-19 10:48:33.131938] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:20:45.690 [2024-11-19 10:48:33.132023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366732 ] 00:20:45.690 [2024-11-19 10:48:33.198332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.690 [2024-11-19 10:48:33.256929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.949 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.949 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:45.949 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:46.206 [2024-11-19 10:48:33.625813] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QihNeMcnYO': 0100666 00:20:46.206 [2024-11-19 10:48:33.625853] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:46.206 request: 00:20:46.207 { 00:20:46.207 "name": "key0", 00:20:46.207 "path": "/tmp/tmp.QihNeMcnYO", 00:20:46.207 "method": "keyring_file_add_key", 00:20:46.207 "req_id": 1 00:20:46.207 } 00:20:46.207 Got JSON-RPC error response 00:20:46.207 response: 00:20:46.207 { 00:20:46.207 "code": -1, 00:20:46.207 "message": "Operation not permitted" 00:20:46.207 } 00:20:46.207 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:46.465 [2024-11-19 10:48:33.894657] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.465 [2024-11-19 10:48:33.894723] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:46.465 request: 00:20:46.465 { 00:20:46.465 "name": "TLSTEST", 00:20:46.465 "trtype": "tcp", 00:20:46.465 "traddr": "10.0.0.2", 00:20:46.465 "adrfam": "ipv4", 00:20:46.465 "trsvcid": "4420", 00:20:46.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.465 "prchk_reftag": false, 00:20:46.465 "prchk_guard": false, 00:20:46.465 "hdgst": false, 00:20:46.465 "ddgst": false, 00:20:46.465 "psk": "key0", 00:20:46.465 "allow_unrecognized_csi": false, 00:20:46.465 "method": "bdev_nvme_attach_controller", 00:20:46.465 "req_id": 1 00:20:46.465 } 00:20:46.465 Got JSON-RPC error response 00:20:46.465 response: 00:20:46.465 { 00:20:46.465 "code": -126, 00:20:46.465 "message": "Required key not available" 00:20:46.465 } 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1366732 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1366732 ']' 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1366732 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1366732 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1366732' 00:20:46.465 killing process with pid 1366732 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1366732 00:20:46.465 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.465 00:20:46.465 Latency(us) 00:20:46.465 [2024-11-19T09:48:34.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.465 [2024-11-19T09:48:34.088Z] =================================================================================================================== 00:20:46.465 [2024-11-19T09:48:34.088Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.465 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1366732 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1365120 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1365120 ']' 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1365120 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1365120 00:20:46.723 
10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1365120' 00:20:46.723 killing process with pid 1365120 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1365120 00:20:46.723 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1365120 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1366875 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1366875 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1366875 ']' 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:46.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.981 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.981 [2024-11-19 10:48:34.506098] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:46.981 [2024-11-19 10:48:34.506203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.981 [2024-11-19 10:48:34.575712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.239 [2024-11-19 10:48:34.633244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.239 [2024-11-19 10:48:34.633312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.239 [2024-11-19 10:48:34.633343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.239 [2024-11-19 10:48:34.633354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.239 [2024-11-19 10:48:34.633363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.239 [2024-11-19 10:48:34.633945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:20:47.239 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QihNeMcnYO 00:20:47.240 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:47.498 [2024-11-19 10:48:35.022659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.498 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:47.756 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:48.014 [2024-11-19 10:48:35.560102] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.014 [2024-11-19 10:48:35.560393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.014 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:48.272 malloc0 00:20:48.272 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:48.531 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:48.790 [2024-11-19 10:48:36.384673] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QihNeMcnYO': 0100666 00:20:48.790 [2024-11-19 10:48:36.384713] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:48.790 request: 00:20:48.790 { 00:20:48.790 "name": "key0", 00:20:48.790 "path": "/tmp/tmp.QihNeMcnYO", 00:20:48.790 "method": "keyring_file_add_key", 00:20:48.790 "req_id": 1 
00:20:48.790 } 00:20:48.790 Got JSON-RPC error response 00:20:48.790 response: 00:20:48.790 { 00:20:48.790 "code": -1, 00:20:48.790 "message": "Operation not permitted" 00:20:48.790 } 00:20:48.790 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.048 [2024-11-19 10:48:36.653415] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:49.048 [2024-11-19 10:48:36.653487] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:49.048 request: 00:20:49.048 { 00:20:49.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.048 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.048 "psk": "key0", 00:20:49.048 "method": "nvmf_subsystem_add_host", 00:20:49.048 "req_id": 1 00:20:49.048 } 00:20:49.048 Got JSON-RPC error response 00:20:49.048 response: 00:20:49.048 { 00:20:49.048 "code": -32603, 00:20:49.048 "message": "Internal error" 00:20:49.048 } 00:20:49.048 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:49.048 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.048 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.048 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1366875 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1366875 ']' 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1366875 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.306 10:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1366875 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1366875' 00:20:49.306 killing process with pid 1366875 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1366875 00:20:49.306 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1366875 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.QihNeMcnYO 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1367181 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1367181 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1367181 ']' 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.565 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.565 [2024-11-19 10:48:36.995792] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:49.565 [2024-11-19 10:48:36.995879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.565 [2024-11-19 10:48:37.066519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.565 [2024-11-19 10:48:37.119035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.565 [2024-11-19 10:48:37.119093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.565 [2024-11-19 10:48:37.119120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.565 [2024-11-19 10:48:37.119131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.565 [2024-11-19 10:48:37.119140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.565 [2024-11-19 10:48:37.119734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QihNeMcnYO 00:20:49.823 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.082 [2024-11-19 10:48:37.559743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.082 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.340 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.599 [2024-11-19 10:48:38.161415] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.599 [2024-11-19 10:48:38.161665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:50.599 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.858 malloc0 00:20:50.858 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:51.116 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1367468 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1367468 /var/tmp/bdevperf.sock 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1367468 ']' 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:51.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.688 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.951 [2024-11-19 10:48:39.314181] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:51.951 [2024-11-19 10:48:39.314264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367468 ] 00:20:51.951 [2024-11-19 10:48:39.380768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.951 [2024-11-19 10:48:39.438204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.951 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.951 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.951 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:20:52.517 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.517 [2024-11-19 10:48:40.081220] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.782 TLSTESTn1 00:20:52.782 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:53.106 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:53.106 "subsystems": [ 00:20:53.107 { 00:20:53.107 "subsystem": "keyring", 00:20:53.107 "config": [ 00:20:53.107 { 00:20:53.107 "method": "keyring_file_add_key", 00:20:53.107 "params": { 00:20:53.107 "name": "key0", 00:20:53.107 "path": "/tmp/tmp.QihNeMcnYO" 00:20:53.107 } 00:20:53.107 } 00:20:53.107 ] 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "subsystem": "iobuf", 00:20:53.107 "config": [ 00:20:53.107 { 00:20:53.107 "method": "iobuf_set_options", 00:20:53.107 "params": { 00:20:53.107 "small_pool_count": 8192, 00:20:53.107 "large_pool_count": 1024, 00:20:53.107 "small_bufsize": 8192, 00:20:53.107 "large_bufsize": 135168, 00:20:53.107 "enable_numa": false 00:20:53.107 } 00:20:53.107 } 00:20:53.107 ] 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "subsystem": "sock", 00:20:53.107 "config": [ 00:20:53.107 { 00:20:53.107 "method": "sock_set_default_impl", 00:20:53.107 "params": { 00:20:53.107 "impl_name": "posix" 00:20:53.107 } 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "method": "sock_impl_set_options", 00:20:53.107 "params": { 00:20:53.107 "impl_name": "ssl", 00:20:53.107 "recv_buf_size": 4096, 00:20:53.107 "send_buf_size": 4096, 00:20:53.107 "enable_recv_pipe": true, 00:20:53.107 "enable_quickack": false, 00:20:53.107 "enable_placement_id": 0, 00:20:53.107 "enable_zerocopy_send_server": true, 00:20:53.107 "enable_zerocopy_send_client": false, 00:20:53.107 "zerocopy_threshold": 0, 00:20:53.107 "tls_version": 0, 00:20:53.107 "enable_ktls": false 00:20:53.107 } 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "method": "sock_impl_set_options", 00:20:53.107 "params": { 00:20:53.107 "impl_name": "posix", 00:20:53.107 "recv_buf_size": 2097152, 00:20:53.107 "send_buf_size": 2097152, 00:20:53.107 "enable_recv_pipe": true, 00:20:53.107 "enable_quickack": false, 00:20:53.107 "enable_placement_id": 0, 
00:20:53.107 "enable_zerocopy_send_server": true, 00:20:53.107 "enable_zerocopy_send_client": false, 00:20:53.107 "zerocopy_threshold": 0, 00:20:53.107 "tls_version": 0, 00:20:53.107 "enable_ktls": false 00:20:53.107 } 00:20:53.107 } 00:20:53.107 ] 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "subsystem": "vmd", 00:20:53.107 "config": [] 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "subsystem": "accel", 00:20:53.107 "config": [ 00:20:53.107 { 00:20:53.107 "method": "accel_set_options", 00:20:53.107 "params": { 00:20:53.107 "small_cache_size": 128, 00:20:53.107 "large_cache_size": 16, 00:20:53.107 "task_count": 2048, 00:20:53.107 "sequence_count": 2048, 00:20:53.107 "buf_count": 2048 00:20:53.107 } 00:20:53.107 } 00:20:53.107 ] 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "subsystem": "bdev", 00:20:53.107 "config": [ 00:20:53.107 { 00:20:53.107 "method": "bdev_set_options", 00:20:53.107 "params": { 00:20:53.107 "bdev_io_pool_size": 65535, 00:20:53.107 "bdev_io_cache_size": 256, 00:20:53.107 "bdev_auto_examine": true, 00:20:53.107 "iobuf_small_cache_size": 128, 00:20:53.107 "iobuf_large_cache_size": 16 00:20:53.107 } 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "method": "bdev_raid_set_options", 00:20:53.107 "params": { 00:20:53.107 "process_window_size_kb": 1024, 00:20:53.107 "process_max_bandwidth_mb_sec": 0 00:20:53.107 } 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "method": "bdev_iscsi_set_options", 00:20:53.107 "params": { 00:20:53.107 "timeout_sec": 30 00:20:53.107 } 00:20:53.107 }, 00:20:53.107 { 00:20:53.107 "method": "bdev_nvme_set_options", 00:20:53.107 "params": { 00:20:53.107 "action_on_timeout": "none", 00:20:53.107 "timeout_us": 0, 00:20:53.107 "timeout_admin_us": 0, 00:20:53.107 "keep_alive_timeout_ms": 10000, 00:20:53.107 "arbitration_burst": 0, 00:20:53.107 "low_priority_weight": 0, 00:20:53.107 "medium_priority_weight": 0, 00:20:53.107 "high_priority_weight": 0, 00:20:53.107 "nvme_adminq_poll_period_us": 10000, 00:20:53.107 "nvme_ioq_poll_period_us": 0, 
00:20:53.107 "io_queue_requests": 0, 00:20:53.107 "delay_cmd_submit": true, 00:20:53.107 "transport_retry_count": 4, 00:20:53.107 "bdev_retry_count": 3, 00:20:53.107 "transport_ack_timeout": 0, 00:20:53.107 "ctrlr_loss_timeout_sec": 0, 00:20:53.107 "reconnect_delay_sec": 0, 00:20:53.107 "fast_io_fail_timeout_sec": 0, 00:20:53.108 "disable_auto_failback": false, 00:20:53.108 "generate_uuids": false, 00:20:53.108 "transport_tos": 0, 00:20:53.108 "nvme_error_stat": false, 00:20:53.108 "rdma_srq_size": 0, 00:20:53.108 "io_path_stat": false, 00:20:53.108 "allow_accel_sequence": false, 00:20:53.108 "rdma_max_cq_size": 0, 00:20:53.108 "rdma_cm_event_timeout_ms": 0, 00:20:53.108 "dhchap_digests": [ 00:20:53.108 "sha256", 00:20:53.108 "sha384", 00:20:53.108 "sha512" 00:20:53.108 ], 00:20:53.108 "dhchap_dhgroups": [ 00:20:53.108 "null", 00:20:53.108 "ffdhe2048", 00:20:53.108 "ffdhe3072", 00:20:53.108 "ffdhe4096", 00:20:53.108 "ffdhe6144", 00:20:53.108 "ffdhe8192" 00:20:53.108 ] 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "bdev_nvme_set_hotplug", 00:20:53.108 "params": { 00:20:53.108 "period_us": 100000, 00:20:53.108 "enable": false 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "bdev_malloc_create", 00:20:53.108 "params": { 00:20:53.108 "name": "malloc0", 00:20:53.108 "num_blocks": 8192, 00:20:53.108 "block_size": 4096, 00:20:53.108 "physical_block_size": 4096, 00:20:53.108 "uuid": "a0e19a03-e54b-40f2-8ab7-de035284051e", 00:20:53.108 "optimal_io_boundary": 0, 00:20:53.108 "md_size": 0, 00:20:53.108 "dif_type": 0, 00:20:53.108 "dif_is_head_of_md": false, 00:20:53.108 "dif_pi_format": 0 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "bdev_wait_for_examine" 00:20:53.108 } 00:20:53.108 ] 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "subsystem": "nbd", 00:20:53.108 "config": [] 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "subsystem": "scheduler", 00:20:53.108 "config": [ 00:20:53.108 { 00:20:53.108 "method": 
"framework_set_scheduler", 00:20:53.108 "params": { 00:20:53.108 "name": "static" 00:20:53.108 } 00:20:53.108 } 00:20:53.108 ] 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "subsystem": "nvmf", 00:20:53.108 "config": [ 00:20:53.108 { 00:20:53.108 "method": "nvmf_set_config", 00:20:53.108 "params": { 00:20:53.108 "discovery_filter": "match_any", 00:20:53.108 "admin_cmd_passthru": { 00:20:53.108 "identify_ctrlr": false 00:20:53.108 }, 00:20:53.108 "dhchap_digests": [ 00:20:53.108 "sha256", 00:20:53.108 "sha384", 00:20:53.108 "sha512" 00:20:53.108 ], 00:20:53.108 "dhchap_dhgroups": [ 00:20:53.108 "null", 00:20:53.108 "ffdhe2048", 00:20:53.108 "ffdhe3072", 00:20:53.108 "ffdhe4096", 00:20:53.108 "ffdhe6144", 00:20:53.108 "ffdhe8192" 00:20:53.108 ] 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "nvmf_set_max_subsystems", 00:20:53.108 "params": { 00:20:53.108 "max_subsystems": 1024 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "nvmf_set_crdt", 00:20:53.108 "params": { 00:20:53.108 "crdt1": 0, 00:20:53.108 "crdt2": 0, 00:20:53.108 "crdt3": 0 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "nvmf_create_transport", 00:20:53.108 "params": { 00:20:53.108 "trtype": "TCP", 00:20:53.108 "max_queue_depth": 128, 00:20:53.108 "max_io_qpairs_per_ctrlr": 127, 00:20:53.108 "in_capsule_data_size": 4096, 00:20:53.108 "max_io_size": 131072, 00:20:53.108 "io_unit_size": 131072, 00:20:53.108 "max_aq_depth": 128, 00:20:53.108 "num_shared_buffers": 511, 00:20:53.108 "buf_cache_size": 4294967295, 00:20:53.108 "dif_insert_or_strip": false, 00:20:53.108 "zcopy": false, 00:20:53.108 "c2h_success": false, 00:20:53.108 "sock_priority": 0, 00:20:53.108 "abort_timeout_sec": 1, 00:20:53.108 "ack_timeout": 0, 00:20:53.108 "data_wr_pool_size": 0 00:20:53.108 } 00:20:53.108 }, 00:20:53.108 { 00:20:53.108 "method": "nvmf_create_subsystem", 00:20:53.108 "params": { 00:20:53.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.108 
"allow_any_host": false, 00:20:53.108 "serial_number": "SPDK00000000000001", 00:20:53.108 "model_number": "SPDK bdev Controller", 00:20:53.108 "max_namespaces": 10, 00:20:53.108 "min_cntlid": 1, 00:20:53.109 "max_cntlid": 65519, 00:20:53.109 "ana_reporting": false 00:20:53.109 } 00:20:53.109 }, 00:20:53.109 { 00:20:53.109 "method": "nvmf_subsystem_add_host", 00:20:53.109 "params": { 00:20:53.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.109 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.109 "psk": "key0" 00:20:53.109 } 00:20:53.109 }, 00:20:53.109 { 00:20:53.109 "method": "nvmf_subsystem_add_ns", 00:20:53.109 "params": { 00:20:53.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.109 "namespace": { 00:20:53.109 "nsid": 1, 00:20:53.109 "bdev_name": "malloc0", 00:20:53.109 "nguid": "A0E19A03E54B40F28AB7DE035284051E", 00:20:53.109 "uuid": "a0e19a03-e54b-40f2-8ab7-de035284051e", 00:20:53.109 "no_auto_visible": false 00:20:53.109 } 00:20:53.109 } 00:20:53.109 }, 00:20:53.109 { 00:20:53.109 "method": "nvmf_subsystem_add_listener", 00:20:53.109 "params": { 00:20:53.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.109 "listen_address": { 00:20:53.109 "trtype": "TCP", 00:20:53.109 "adrfam": "IPv4", 00:20:53.109 "traddr": "10.0.0.2", 00:20:53.109 "trsvcid": "4420" 00:20:53.109 }, 00:20:53.109 "secure_channel": true 00:20:53.109 } 00:20:53.109 } 00:20:53.109 ] 00:20:53.109 } 00:20:53.109 ] 00:20:53.109 }' 00:20:53.109 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:53.391 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:53.391 "subsystems": [ 00:20:53.391 { 00:20:53.391 "subsystem": "keyring", 00:20:53.391 "config": [ 00:20:53.391 { 00:20:53.391 "method": "keyring_file_add_key", 00:20:53.391 "params": { 00:20:53.391 "name": "key0", 00:20:53.391 "path": "/tmp/tmp.QihNeMcnYO" 00:20:53.391 } 
00:20:53.391 } 00:20:53.391 ] 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "subsystem": "iobuf", 00:20:53.391 "config": [ 00:20:53.391 { 00:20:53.391 "method": "iobuf_set_options", 00:20:53.391 "params": { 00:20:53.391 "small_pool_count": 8192, 00:20:53.391 "large_pool_count": 1024, 00:20:53.391 "small_bufsize": 8192, 00:20:53.391 "large_bufsize": 135168, 00:20:53.391 "enable_numa": false 00:20:53.391 } 00:20:53.391 } 00:20:53.391 ] 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "subsystem": "sock", 00:20:53.391 "config": [ 00:20:53.391 { 00:20:53.391 "method": "sock_set_default_impl", 00:20:53.391 "params": { 00:20:53.391 "impl_name": "posix" 00:20:53.391 } 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "method": "sock_impl_set_options", 00:20:53.391 "params": { 00:20:53.391 "impl_name": "ssl", 00:20:53.391 "recv_buf_size": 4096, 00:20:53.391 "send_buf_size": 4096, 00:20:53.391 "enable_recv_pipe": true, 00:20:53.391 "enable_quickack": false, 00:20:53.391 "enable_placement_id": 0, 00:20:53.391 "enable_zerocopy_send_server": true, 00:20:53.391 "enable_zerocopy_send_client": false, 00:20:53.391 "zerocopy_threshold": 0, 00:20:53.391 "tls_version": 0, 00:20:53.391 "enable_ktls": false 00:20:53.391 } 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "method": "sock_impl_set_options", 00:20:53.391 "params": { 00:20:53.391 "impl_name": "posix", 00:20:53.391 "recv_buf_size": 2097152, 00:20:53.391 "send_buf_size": 2097152, 00:20:53.391 "enable_recv_pipe": true, 00:20:53.391 "enable_quickack": false, 00:20:53.391 "enable_placement_id": 0, 00:20:53.391 "enable_zerocopy_send_server": true, 00:20:53.391 "enable_zerocopy_send_client": false, 00:20:53.391 "zerocopy_threshold": 0, 00:20:53.391 "tls_version": 0, 00:20:53.391 "enable_ktls": false 00:20:53.391 } 00:20:53.391 } 00:20:53.391 ] 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "subsystem": "vmd", 00:20:53.391 "config": [] 00:20:53.391 }, 00:20:53.391 { 00:20:53.391 "subsystem": "accel", 00:20:53.391 "config": [ 00:20:53.391 { 00:20:53.391 
"method": "accel_set_options", 00:20:53.391 "params": { 00:20:53.391 "small_cache_size": 128, 00:20:53.391 "large_cache_size": 16, 00:20:53.391 "task_count": 2048, 00:20:53.391 "sequence_count": 2048, 00:20:53.391 "buf_count": 2048 00:20:53.392 } 00:20:53.392 } 00:20:53.392 ] 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "subsystem": "bdev", 00:20:53.392 "config": [ 00:20:53.392 { 00:20:53.392 "method": "bdev_set_options", 00:20:53.392 "params": { 00:20:53.392 "bdev_io_pool_size": 65535, 00:20:53.392 "bdev_io_cache_size": 256, 00:20:53.392 "bdev_auto_examine": true, 00:20:53.392 "iobuf_small_cache_size": 128, 00:20:53.392 "iobuf_large_cache_size": 16 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_raid_set_options", 00:20:53.392 "params": { 00:20:53.392 "process_window_size_kb": 1024, 00:20:53.392 "process_max_bandwidth_mb_sec": 0 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_iscsi_set_options", 00:20:53.392 "params": { 00:20:53.392 "timeout_sec": 30 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_nvme_set_options", 00:20:53.392 "params": { 00:20:53.392 "action_on_timeout": "none", 00:20:53.392 "timeout_us": 0, 00:20:53.392 "timeout_admin_us": 0, 00:20:53.392 "keep_alive_timeout_ms": 10000, 00:20:53.392 "arbitration_burst": 0, 00:20:53.392 "low_priority_weight": 0, 00:20:53.392 "medium_priority_weight": 0, 00:20:53.392 "high_priority_weight": 0, 00:20:53.392 "nvme_adminq_poll_period_us": 10000, 00:20:53.392 "nvme_ioq_poll_period_us": 0, 00:20:53.392 "io_queue_requests": 512, 00:20:53.392 "delay_cmd_submit": true, 00:20:53.392 "transport_retry_count": 4, 00:20:53.392 "bdev_retry_count": 3, 00:20:53.392 "transport_ack_timeout": 0, 00:20:53.392 "ctrlr_loss_timeout_sec": 0, 00:20:53.392 "reconnect_delay_sec": 0, 00:20:53.392 "fast_io_fail_timeout_sec": 0, 00:20:53.392 "disable_auto_failback": false, 00:20:53.392 "generate_uuids": false, 00:20:53.392 "transport_tos": 0, 00:20:53.392 
"nvme_error_stat": false, 00:20:53.392 "rdma_srq_size": 0, 00:20:53.392 "io_path_stat": false, 00:20:53.392 "allow_accel_sequence": false, 00:20:53.392 "rdma_max_cq_size": 0, 00:20:53.392 "rdma_cm_event_timeout_ms": 0, 00:20:53.392 "dhchap_digests": [ 00:20:53.392 "sha256", 00:20:53.392 "sha384", 00:20:53.392 "sha512" 00:20:53.392 ], 00:20:53.392 "dhchap_dhgroups": [ 00:20:53.392 "null", 00:20:53.392 "ffdhe2048", 00:20:53.392 "ffdhe3072", 00:20:53.392 "ffdhe4096", 00:20:53.392 "ffdhe6144", 00:20:53.392 "ffdhe8192" 00:20:53.392 ] 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_nvme_attach_controller", 00:20:53.392 "params": { 00:20:53.392 "name": "TLSTEST", 00:20:53.392 "trtype": "TCP", 00:20:53.392 "adrfam": "IPv4", 00:20:53.392 "traddr": "10.0.0.2", 00:20:53.392 "trsvcid": "4420", 00:20:53.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.392 "prchk_reftag": false, 00:20:53.392 "prchk_guard": false, 00:20:53.392 "ctrlr_loss_timeout_sec": 0, 00:20:53.392 "reconnect_delay_sec": 0, 00:20:53.392 "fast_io_fail_timeout_sec": 0, 00:20:53.392 "psk": "key0", 00:20:53.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.392 "hdgst": false, 00:20:53.392 "ddgst": false, 00:20:53.392 "multipath": "multipath" 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_nvme_set_hotplug", 00:20:53.392 "params": { 00:20:53.392 "period_us": 100000, 00:20:53.392 "enable": false 00:20:53.392 } 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "method": "bdev_wait_for_examine" 00:20:53.392 } 00:20:53.392 ] 00:20:53.392 }, 00:20:53.392 { 00:20:53.392 "subsystem": "nbd", 00:20:53.392 "config": [] 00:20:53.392 } 00:20:53.392 ] 00:20:53.392 }' 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1367468 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1367468 ']' 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1367468 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367468 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367468' 00:20:53.392 killing process with pid 1367468 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1367468 00:20:53.392 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.392 00:20:53.392 Latency(us) 00:20:53.392 [2024-11-19T09:48:41.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.392 [2024-11-19T09:48:41.015Z] =================================================================================================================== 00:20:53.392 [2024-11-19T09:48:41.015Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.392 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1367468 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1367181 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1367181 ']' 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1367181 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367181 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367181' 00:20:53.650 killing process with pid 1367181 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1367181 00:20:53.650 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1367181 00:20:53.909 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:53.909 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.909 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:53.909 "subsystems": [ 00:20:53.909 { 00:20:53.909 "subsystem": "keyring", 00:20:53.909 "config": [ 00:20:53.909 { 00:20:53.909 "method": "keyring_file_add_key", 00:20:53.909 "params": { 00:20:53.909 "name": "key0", 00:20:53.909 "path": "/tmp/tmp.QihNeMcnYO" 00:20:53.909 } 00:20:53.909 } 00:20:53.909 ] 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "subsystem": "iobuf", 00:20:53.909 "config": [ 00:20:53.909 { 00:20:53.909 "method": "iobuf_set_options", 00:20:53.909 "params": { 00:20:53.909 "small_pool_count": 8192, 00:20:53.909 "large_pool_count": 1024, 00:20:53.909 "small_bufsize": 8192, 00:20:53.909 "large_bufsize": 135168, 00:20:53.909 "enable_numa": false 00:20:53.909 } 00:20:53.909 } 00:20:53.909 ] 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "subsystem": "sock", 00:20:53.909 "config": [ 00:20:53.909 { 00:20:53.909 "method": 
"sock_set_default_impl", 00:20:53.909 "params": { 00:20:53.909 "impl_name": "posix" 00:20:53.909 } 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "method": "sock_impl_set_options", 00:20:53.909 "params": { 00:20:53.909 "impl_name": "ssl", 00:20:53.909 "recv_buf_size": 4096, 00:20:53.909 "send_buf_size": 4096, 00:20:53.909 "enable_recv_pipe": true, 00:20:53.909 "enable_quickack": false, 00:20:53.909 "enable_placement_id": 0, 00:20:53.909 "enable_zerocopy_send_server": true, 00:20:53.909 "enable_zerocopy_send_client": false, 00:20:53.909 "zerocopy_threshold": 0, 00:20:53.909 "tls_version": 0, 00:20:53.909 "enable_ktls": false 00:20:53.909 } 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "method": "sock_impl_set_options", 00:20:53.909 "params": { 00:20:53.909 "impl_name": "posix", 00:20:53.909 "recv_buf_size": 2097152, 00:20:53.909 "send_buf_size": 2097152, 00:20:53.909 "enable_recv_pipe": true, 00:20:53.909 "enable_quickack": false, 00:20:53.909 "enable_placement_id": 0, 00:20:53.909 "enable_zerocopy_send_server": true, 00:20:53.909 "enable_zerocopy_send_client": false, 00:20:53.909 "zerocopy_threshold": 0, 00:20:53.909 "tls_version": 0, 00:20:53.909 "enable_ktls": false 00:20:53.909 } 00:20:53.909 } 00:20:53.909 ] 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "subsystem": "vmd", 00:20:53.909 "config": [] 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "subsystem": "accel", 00:20:53.909 "config": [ 00:20:53.909 { 00:20:53.909 "method": "accel_set_options", 00:20:53.909 "params": { 00:20:53.909 "small_cache_size": 128, 00:20:53.909 "large_cache_size": 16, 00:20:53.909 "task_count": 2048, 00:20:53.909 "sequence_count": 2048, 00:20:53.909 "buf_count": 2048 00:20:53.909 } 00:20:53.909 } 00:20:53.909 ] 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "subsystem": "bdev", 00:20:53.909 "config": [ 00:20:53.909 { 00:20:53.909 "method": "bdev_set_options", 00:20:53.909 "params": { 00:20:53.909 "bdev_io_pool_size": 65535, 00:20:53.909 "bdev_io_cache_size": 256, 00:20:53.909 
"bdev_auto_examine": true, 00:20:53.909 "iobuf_small_cache_size": 128, 00:20:53.909 "iobuf_large_cache_size": 16 00:20:53.909 } 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "method": "bdev_raid_set_options", 00:20:53.909 "params": { 00:20:53.909 "process_window_size_kb": 1024, 00:20:53.909 "process_max_bandwidth_mb_sec": 0 00:20:53.909 } 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "method": "bdev_iscsi_set_options", 00:20:53.909 "params": { 00:20:53.909 "timeout_sec": 30 00:20:53.909 } 00:20:53.909 }, 00:20:53.909 { 00:20:53.909 "method": "bdev_nvme_set_options", 00:20:53.909 "params": { 00:20:53.909 "action_on_timeout": "none", 00:20:53.909 "timeout_us": 0, 00:20:53.909 "timeout_admin_us": 0, 00:20:53.909 "keep_alive_timeout_ms": 10000, 00:20:53.909 "arbitration_burst": 0, 00:20:53.909 "low_priority_weight": 0, 00:20:53.909 "medium_priority_weight": 0, 00:20:53.909 "high_priority_weight": 0, 00:20:53.909 "nvme_adminq_poll_period_us": 10000, 00:20:53.909 "nvme_ioq_poll_period_us": 0, 00:20:53.909 "io_queue_requests": 0, 00:20:53.909 "delay_cmd_submit": true, 00:20:53.909 "transport_retry_count": 4, 00:20:53.909 "bdev_retry_count": 3, 00:20:53.909 "transport_ack_timeout": 0, 00:20:53.909 "ctrlr_loss_timeout_sec": 0, 00:20:53.909 "reconnect_delay_sec": 0, 00:20:53.910 "fast_io_fail_timeout_sec": 0, 00:20:53.910 "disable_auto_failback": false, 00:20:53.910 "generate_uuids": false, 00:20:53.910 "transport_tos": 0, 00:20:53.910 "nvme_error_stat": false, 00:20:53.910 "rdma_srq_size": 0, 00:20:53.910 "io_path_stat": false, 00:20:53.910 "allow_accel_sequence": false, 00:20:53.910 "rdma_max_cq_size": 0, 00:20:53.910 "rdma_cm_event_timeout_ms": 0, 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.910 "dhchap_digests": [ 00:20:53.910 "sha256", 00:20:53.910 "sha384", 00:20:53.910 "sha512" 00:20:53.910 ], 00:20:53.910 "dhchap_dhgroups": [ 00:20:53.910 "null", 00:20:53.910 "ffdhe2048", 00:20:53.910 "ffdhe3072", 
00:20:53.910 "ffdhe4096", 00:20:53.910 "ffdhe6144", 00:20:53.910 "ffdhe8192" 00:20:53.910 ] 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "bdev_nvme_set_hotplug", 00:20:53.910 "params": { 00:20:53.910 "period_us": 100000, 00:20:53.910 "enable": false 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "bdev_malloc_create", 00:20:53.910 "params": { 00:20:53.910 "name": "malloc0", 00:20:53.910 "num_blocks": 8192, 00:20:53.910 "block_size": 4096, 00:20:53.910 "physical_block_size": 4096, 00:20:53.910 "uuid": "a0e19a03-e54b-40f2-8ab7-de035284051e", 00:20:53.910 "optimal_io_boundary": 0, 00:20:53.910 "md_size": 0, 00:20:53.910 "dif_type": 0, 00:20:53.910 "dif_is_head_of_md": false, 00:20:53.910 "dif_pi_format": 0 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "bdev_wait_for_examine" 00:20:53.910 } 00:20:53.910 ] 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "subsystem": "nbd", 00:20:53.910 "config": [] 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "subsystem": "scheduler", 00:20:53.910 "config": [ 00:20:53.910 { 00:20:53.910 "method": "framework_set_scheduler", 00:20:53.910 "params": { 00:20:53.910 "name": "static" 00:20:53.910 } 00:20:53.910 } 00:20:53.910 ] 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "subsystem": "nvmf", 00:20:53.910 "config": [ 00:20:53.910 { 00:20:53.910 "method": "nvmf_set_config", 00:20:53.910 "params": { 00:20:53.910 "discovery_filter": "match_any", 00:20:53.910 "admin_cmd_passthru": { 00:20:53.910 "identify_ctrlr": false 00:20:53.910 }, 00:20:53.910 "dhchap_digests": [ 00:20:53.910 "sha256", 00:20:53.910 "sha384", 00:20:53.910 "sha512" 00:20:53.910 ], 00:20:53.910 "dhchap_dhgroups": [ 00:20:53.910 "null", 00:20:53.910 "ffdhe2048", 00:20:53.910 "ffdhe3072", 00:20:53.910 "ffdhe4096", 00:20:53.910 "ffdhe6144", 00:20:53.910 "ffdhe8192" 00:20:53.910 ] 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_set_max_subsystems", 00:20:53.910 "params": { 00:20:53.910 
"max_subsystems": 1024 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_set_crdt", 00:20:53.910 "params": { 00:20:53.910 "crdt1": 0, 00:20:53.910 "crdt2": 0, 00:20:53.910 "crdt3": 0 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_create_transport", 00:20:53.910 "params": { 00:20:53.910 "trtype": "TCP", 00:20:53.910 "max_queue_depth": 128, 00:20:53.910 "max_io_qpairs_per_ctrlr": 127, 00:20:53.910 "in_capsule_data_size": 4096, 00:20:53.910 "max_io_size": 131072, 00:20:53.910 "io_unit_size": 131072, 00:20:53.910 "max_aq_depth": 128, 00:20:53.910 "num_shared_buffers": 511, 00:20:53.910 "buf_cache_size": 4294967295, 00:20:53.910 "dif_insert_or_strip": false, 00:20:53.910 "zcopy": false, 00:20:53.910 "c2h_success": false, 00:20:53.910 "sock_priority": 0, 00:20:53.910 "abort_timeout_sec": 1, 00:20:53.910 "ack_timeout": 0, 00:20:53.910 "data_wr_pool_size": 0 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_create_subsystem", 00:20:53.910 "params": { 00:20:53.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.910 "allow_any_host": false, 00:20:53.910 "serial_number": "SPDK00000000000001", 00:20:53.910 "model_number": "SPDK bdev Controller", 00:20:53.910 "max_namespaces": 10, 00:20:53.910 "min_cntlid": 1, 00:20:53.910 "max_cntlid": 65519, 00:20:53.910 "ana_reporting": false 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_subsystem_add_host", 00:20:53.910 "params": { 00:20:53.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.910 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.910 "psk": "key0" 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_subsystem_add_ns", 00:20:53.910 "params": { 00:20:53.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.910 "namespace": { 00:20:53.910 "nsid": 1, 00:20:53.910 "bdev_name": "malloc0", 00:20:53.910 "nguid": "A0E19A03E54B40F28AB7DE035284051E", 00:20:53.910 "uuid": "a0e19a03-e54b-40f2-8ab7-de035284051e", 
00:20:53.910 "no_auto_visible": false 00:20:53.910 } 00:20:53.910 } 00:20:53.910 }, 00:20:53.910 { 00:20:53.910 "method": "nvmf_subsystem_add_listener", 00:20:53.910 "params": { 00:20:53.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.910 "listen_address": { 00:20:53.910 "trtype": "TCP", 00:20:53.910 "adrfam": "IPv4", 00:20:53.910 "traddr": "10.0.0.2", 00:20:53.910 "trsvcid": "4420" 00:20:53.910 }, 00:20:53.910 "secure_channel": true 00:20:53.910 } 00:20:53.910 } 00:20:53.910 ] 00:20:53.910 } 00:20:53.910 ] 00:20:53.910 }' 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1367750 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1367750 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1367750 ']' 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.910 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:53.911 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.911 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.911 [2024-11-19 10:48:41.466950] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:53.911 [2024-11-19 10:48:41.467040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.169 [2024-11-19 10:48:41.540044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.169 [2024-11-19 10:48:41.596979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.169 [2024-11-19 10:48:41.597031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.169 [2024-11-19 10:48:41.597044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.169 [2024-11-19 10:48:41.597055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.169 [2024-11-19 10:48:41.597065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.169 [2024-11-19 10:48:41.597725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.428 [2024-11-19 10:48:41.840558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.428 [2024-11-19 10:48:41.872568] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.428 [2024-11-19 10:48:41.872805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.993 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1367898 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1367898 /var/tmp/bdevperf.sock 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1367898 ']' 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.994 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:54.994 "subsystems": [ 00:20:54.994 { 00:20:54.994 "subsystem": "keyring", 00:20:54.994 "config": [ 00:20:54.994 { 00:20:54.994 "method": "keyring_file_add_key", 00:20:54.994 "params": { 00:20:54.994 "name": "key0", 00:20:54.994 "path": "/tmp/tmp.QihNeMcnYO" 00:20:54.994 } 00:20:54.994 } 00:20:54.994 ] 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "subsystem": "iobuf", 00:20:54.994 "config": [ 00:20:54.994 { 00:20:54.994 "method": "iobuf_set_options", 00:20:54.994 "params": { 00:20:54.994 "small_pool_count": 8192, 00:20:54.994 "large_pool_count": 1024, 00:20:54.994 "small_bufsize": 8192, 00:20:54.994 "large_bufsize": 135168, 00:20:54.994 "enable_numa": false 00:20:54.994 } 00:20:54.994 } 00:20:54.994 ] 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "subsystem": "sock", 00:20:54.994 "config": [ 00:20:54.994 { 00:20:54.994 "method": "sock_set_default_impl", 00:20:54.994 "params": { 00:20:54.994 "impl_name": "posix" 00:20:54.994 } 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "method": "sock_impl_set_options", 00:20:54.994 "params": { 00:20:54.994 "impl_name": "ssl", 00:20:54.994 "recv_buf_size": 4096, 00:20:54.994 "send_buf_size": 4096, 00:20:54.994 "enable_recv_pipe": true, 00:20:54.994 "enable_quickack": false, 00:20:54.994 "enable_placement_id": 0, 00:20:54.994 "enable_zerocopy_send_server": true, 00:20:54.994 "enable_zerocopy_send_client": false, 00:20:54.994 "zerocopy_threshold": 0, 00:20:54.994 "tls_version": 0, 00:20:54.994 "enable_ktls": false 00:20:54.994 } 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "method": "sock_impl_set_options", 00:20:54.994 "params": { 
00:20:54.994 "impl_name": "posix", 00:20:54.994 "recv_buf_size": 2097152, 00:20:54.994 "send_buf_size": 2097152, 00:20:54.994 "enable_recv_pipe": true, 00:20:54.994 "enable_quickack": false, 00:20:54.994 "enable_placement_id": 0, 00:20:54.994 "enable_zerocopy_send_server": true, 00:20:54.994 "enable_zerocopy_send_client": false, 00:20:54.994 "zerocopy_threshold": 0, 00:20:54.994 "tls_version": 0, 00:20:54.994 "enable_ktls": false 00:20:54.994 } 00:20:54.994 } 00:20:54.994 ] 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "subsystem": "vmd", 00:20:54.994 "config": [] 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "subsystem": "accel", 00:20:54.994 "config": [ 00:20:54.994 { 00:20:54.994 "method": "accel_set_options", 00:20:54.994 "params": { 00:20:54.994 "small_cache_size": 128, 00:20:54.994 "large_cache_size": 16, 00:20:54.994 "task_count": 2048, 00:20:54.994 "sequence_count": 2048, 00:20:54.994 "buf_count": 2048 00:20:54.994 } 00:20:54.994 } 00:20:54.994 ] 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "subsystem": "bdev", 00:20:54.994 "config": [ 00:20:54.994 { 00:20:54.994 "method": "bdev_set_options", 00:20:54.994 "params": { 00:20:54.994 "bdev_io_pool_size": 65535, 00:20:54.994 "bdev_io_cache_size": 256, 00:20:54.994 "bdev_auto_examine": true, 00:20:54.994 "iobuf_small_cache_size": 128, 00:20:54.994 "iobuf_large_cache_size": 16 00:20:54.994 } 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "method": "bdev_raid_set_options", 00:20:54.994 "params": { 00:20:54.994 "process_window_size_kb": 1024, 00:20:54.994 "process_max_bandwidth_mb_sec": 0 00:20:54.994 } 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "method": "bdev_iscsi_set_options", 00:20:54.994 "params": { 00:20:54.994 "timeout_sec": 30 00:20:54.994 } 00:20:54.994 }, 00:20:54.994 { 00:20:54.994 "method": "bdev_nvme_set_options", 00:20:54.994 "params": { 00:20:54.994 "action_on_timeout": "none", 00:20:54.994 "timeout_us": 0, 00:20:54.994 "timeout_admin_us": 0, 00:20:54.994 "keep_alive_timeout_ms": 10000, 00:20:54.994 
"arbitration_burst": 0, 00:20:54.994 "low_priority_weight": 0, 00:20:54.994 "medium_priority_weight": 0, 00:20:54.994 "high_priority_weight": 0, 00:20:54.994 "nvme_adminq_poll_period_us": 10000, 00:20:54.994 "nvme_ioq_poll_period_us": 0, 00:20:54.994 "io_queue_requests": 512, 00:20:54.994 "delay_cmd_submit": true, 00:20:54.994 "transport_retry_count": 4, 00:20:54.994 "bdev_retry_count": 3, 00:20:54.994 "transport_ack_timeout": 0, 00:20:54.994 "ctrlr_loss_timeout_sec": 0, 00:20:54.994 "reconnect_delay_sec": 0, 00:20:54.994 "fast_io_fail_timeout_sec": 0, 00:20:54.994 "disable_auto_failback": false, 00:20:54.994 "generate_uuids": false, 00:20:54.994 "transport_tos": 0, 00:20:54.994 "nvme_error_stat": false, 00:20:54.994 "rdma_srq_size": 0, 00:20:54.995 "io_path_stat": false, 00:20:54.995 "allow_accel_sequence": false, 00:20:54.995 "rdma_max_cq_size": 0, 00:20:54.995 "rdma_cm_event_timeout_ms": 0, 00:20:54.995 "dhchap_digests": [ 00:20:54.995 "sha256", 00:20:54.995 "sha384", 00:20:54.995 "sha512" 00:20:54.995 ], 00:20:54.995 "dhchap_dhgroups": [ 00:20:54.995 "null", 00:20:54.995 "ffdhe2048", 00:20:54.995 "ffdhe3072", 00:20:54.995 "ffdhe4096", 00:20:54.995 "ffdhe6144", 00:20:54.995 "ffdhe8192" 00:20:54.995 ] 00:20:54.995 } 00:20:54.995 }, 00:20:54.995 { 00:20:54.995 "method": "bdev_nvme_attach_controller", 00:20:54.995 "params": { 00:20:54.995 "name": "TLSTEST", 00:20:54.995 "trtype": "TCP", 00:20:54.995 "adrfam": "IPv4", 00:20:54.995 "traddr": "10.0.0.2", 00:20:54.995 "trsvcid": "4420", 00:20:54.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.995 "prchk_reftag": false, 00:20:54.995 "prchk_guard": false, 00:20:54.995 "ctrlr_loss_timeout_sec": 0, 00:20:54.995 "reconnect_delay_sec": 0, 00:20:54.995 "fast_io_fail_timeout_sec": 0, 00:20:54.995 "psk": "key0", 00:20:54.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.995 "hdgst": false, 00:20:54.995 "ddgst": false, 00:20:54.995 "multipath": "multipath" 00:20:54.995 } 00:20:54.995 }, 00:20:54.995 { 00:20:54.995 
"method": "bdev_nvme_set_hotplug", 00:20:54.995 "params": { 00:20:54.995 "period_us": 100000, 00:20:54.995 "enable": false 00:20:54.995 } 00:20:54.995 }, 00:20:54.995 { 00:20:54.995 "method": "bdev_wait_for_examine" 00:20:54.995 } 00:20:54.995 ] 00:20:54.995 }, 00:20:54.995 { 00:20:54.995 "subsystem": "nbd", 00:20:54.995 "config": [] 00:20:54.995 } 00:20:54.995 ] 00:20:54.995 }' 00:20:54.995 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.995 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.995 [2024-11-19 10:48:42.576764] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:20:54.995 [2024-11-19 10:48:42.576840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367898 ] 00:20:55.253 [2024-11-19 10:48:42.646060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.253 [2024-11-19 10:48:42.706250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.510 [2024-11-19 10:48:42.878478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.510 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.510 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.510 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.510 Running I/O for 10 seconds... 
00:20:57.813 3446.00 IOPS, 13.46 MiB/s [2024-11-19T09:48:46.369Z] 3492.50 IOPS, 13.64 MiB/s [2024-11-19T09:48:47.303Z] 3486.33 IOPS, 13.62 MiB/s [2024-11-19T09:48:48.237Z] 3511.00 IOPS, 13.71 MiB/s [2024-11-19T09:48:49.168Z] 3512.60 IOPS, 13.72 MiB/s [2024-11-19T09:48:50.538Z] 3513.33 IOPS, 13.72 MiB/s [2024-11-19T09:48:51.472Z] 3518.14 IOPS, 13.74 MiB/s [2024-11-19T09:48:52.406Z] 3518.88 IOPS, 13.75 MiB/s [2024-11-19T09:48:53.339Z] 3524.11 IOPS, 13.77 MiB/s [2024-11-19T09:48:53.339Z] 3524.60 IOPS, 13.77 MiB/s 00:21:05.716 Latency(us) 00:21:05.716 [2024-11-19T09:48:53.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.716 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.716 Verification LBA range: start 0x0 length 0x2000 00:21:05.716 TLSTESTn1 : 10.02 3529.60 13.79 0.00 0.00 36199.30 6019.60 39030.33 00:21:05.716 [2024-11-19T09:48:53.339Z] =================================================================================================================== 00:21:05.716 [2024-11-19T09:48:53.339Z] Total : 3529.60 13.79 0.00 0.00 36199.30 6019.60 39030.33 00:21:05.716 { 00:21:05.716 "results": [ 00:21:05.716 { 00:21:05.716 "job": "TLSTESTn1", 00:21:05.716 "core_mask": "0x4", 00:21:05.716 "workload": "verify", 00:21:05.716 "status": "finished", 00:21:05.716 "verify_range": { 00:21:05.716 "start": 0, 00:21:05.716 "length": 8192 00:21:05.716 }, 00:21:05.716 "queue_depth": 128, 00:21:05.716 "io_size": 4096, 00:21:05.716 "runtime": 10.021544, 00:21:05.716 "iops": 3529.5958387250507, 00:21:05.716 "mibps": 13.78748374501973, 00:21:05.716 "io_failed": 0, 00:21:05.716 "io_timeout": 0, 00:21:05.716 "avg_latency_us": 36199.29554592249, 00:21:05.716 "min_latency_us": 6019.602962962963, 00:21:05.716 "max_latency_us": 39030.328888888886 00:21:05.716 } 00:21:05.716 ], 00:21:05.716 "core_count": 1 00:21:05.716 } 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
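bdevperf reports the run both as the human-readable latency table and as the JSON blob above, and the two are consistent: the MiB/s column is just iops × io_size / 2^20. A short sketch recomputing it from the raw fields of this run (field names and values copied from the JSON printed above):

```python
import json

# Result blob in the shape bdevperf printed above (abbreviated to the
# fields needed for the throughput calculation).
result = {
    "results": [
        {
            "job": "TLSTESTn1",
            "iops": 3529.5958387250507,
            "io_size": 4096,
            "runtime": 10.021544,
        }
    ]
}

job = result["results"][0]
# 4096-byte IOs: throughput in MiB/s = IOPS * io_size / 2^20
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f'{job["job"]}: {mibps:.2f} MiB/s over {job["runtime"]:.1f} s')
```

This reproduces the 13.79 MiB/s ("mibps") figure in the summary table.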
exit 1' SIGINT SIGTERM EXIT 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1367898 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1367898 ']' 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1367898 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367898 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367898' 00:21:05.716 killing process with pid 1367898 00:21:05.716 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1367898 00:21:05.716 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.716 00:21:05.716 Latency(us) 00:21:05.716 [2024-11-19T09:48:53.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.716 [2024-11-19T09:48:53.339Z] =================================================================================================================== 00:21:05.716 [2024-11-19T09:48:53.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.717 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1367898 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1367750 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1367750 ']' 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1367750 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1367750 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1367750' 00:21:05.974 killing process with pid 1367750 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1367750 00:21:05.974 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1367750 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1369222 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1369222 00:21:06.232 
10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1369222 ']' 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.232 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 [2024-11-19 10:48:53.761601] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:06.232 [2024-11-19 10:48:53.761687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.232 [2024-11-19 10:48:53.829829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.490 [2024-11-19 10:48:53.884643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.490 [2024-11-19 10:48:53.884695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.490 [2024-11-19 10:48:53.884723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.490 [2024-11-19 10:48:53.884733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:06.490 [2024-11-19 10:48:53.884743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.490 [2024-11-19 10:48:53.885294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.QihNeMcnYO 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QihNeMcnYO 00:21:06.490 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.748 [2024-11-19 10:48:54.341954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.748 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.314 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.314 [2024-11-19 10:48:54.887423] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:07.314 [2024-11-19 10:48:54.887665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.314 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.572 malloc0 00:21:07.572 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:08.137 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:21:08.138 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1369506 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1369506 /var/tmp/bdevperf.sock 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1369506 ']' 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.396 
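The target-side TLS setup traced above (the setup_nvmf_tgt helper, target/tls.sh@52 through @59) reduces to a short RPC sequence. A sketch of those same calls, with the long workspace prefix shortened to an `RPC` variable introduced here (paths, NQNs, and the key file are the ones from this run; `rpc.py` talks to the default `/var/tmp/spdk.sock`):

```shell
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
# -k requires TLS on this listener; it is what produces the
# "TLS support is considered experimental" notice in the log.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
```

The host entry's `--psk key0` is what lets the initiator's matching `--psk key0` attach succeed in the bdevperf run that follows.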
10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.396 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.654 [2024-11-19 10:48:56.041754] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:08.654 [2024-11-19 10:48:56.041832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369506 ] 00:21:08.654 [2024-11-19 10:48:56.107980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.654 [2024-11-19 10:48:56.164938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.654 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.654 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:08.654 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:21:09.219 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:09.219 [2024-11-19 10:48:56.807184] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:09.477 nvme0n1 00:21:09.477 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.477 Running I/O for 1 seconds... 00:21:10.666 3365.00 IOPS, 13.14 MiB/s 00:21:10.666 Latency(us) 00:21:10.666 [2024-11-19T09:48:58.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.666 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.666 Verification LBA range: start 0x0 length 0x2000 00:21:10.666 nvme0n1 : 1.03 3384.35 13.22 0.00 0.00 37299.72 5631.24 51652.08 00:21:10.666 [2024-11-19T09:48:58.289Z] =================================================================================================================== 00:21:10.666 [2024-11-19T09:48:58.289Z] Total : 3384.35 13.22 0.00 0.00 37299.72 5631.24 51652.08 00:21:10.666 { 00:21:10.666 "results": [ 00:21:10.666 { 00:21:10.666 "job": "nvme0n1", 00:21:10.666 "core_mask": "0x2", 00:21:10.666 "workload": "verify", 00:21:10.666 "status": "finished", 00:21:10.666 "verify_range": { 00:21:10.666 "start": 0, 00:21:10.666 "length": 8192 00:21:10.666 }, 00:21:10.666 "queue_depth": 128, 00:21:10.666 "io_size": 4096, 00:21:10.666 "runtime": 1.032105, 00:21:10.666 "iops": 3384.345584993775, 00:21:10.666 "mibps": 13.220099941381934, 00:21:10.666 "io_failed": 0, 00:21:10.666 "io_timeout": 0, 00:21:10.666 "avg_latency_us": 37299.72361230397, 00:21:10.666 "min_latency_us": 5631.241481481481, 00:21:10.666 "max_latency_us": 51652.07703703704 00:21:10.666 } 00:21:10.666 ], 00:21:10.666 "core_count": 1 00:21:10.666 } 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1369506 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1369506 ']' 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1369506 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369506 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369506' 00:21:10.666 killing process with pid 1369506 00:21:10.666 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1369506 00:21:10.666 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.666 00:21:10.666 Latency(us) 00:21:10.666 [2024-11-19T09:48:58.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.667 [2024-11-19T09:48:58.290Z] =================================================================================================================== 00:21:10.667 [2024-11-19T09:48:58.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.667 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1369506 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1369222 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1369222 ']' 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1369222 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369222 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369222' 00:21:10.924 killing process with pid 1369222 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1369222 00:21:10.924 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1369222 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1369793 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1369793 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1369793 ']' 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.182 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.182 [2024-11-19 10:48:58.652759] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:11.182 [2024-11-19 10:48:58.652864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.182 [2024-11-19 10:48:58.722850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.182 [2024-11-19 10:48:58.776053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.182 [2024-11-19 10:48:58.776125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.182 [2024-11-19 10:48:58.776155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.182 [2024-11-19 10:48:58.776167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.182 [2024-11-19 10:48:58.776176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:11.182 [2024-11-19 10:48:58.776792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.439 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.440 [2024-11-19 10:48:58.918967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.440 malloc0 00:21:11.440 [2024-11-19 10:48:58.950133] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.440 [2024-11-19 10:48:58.950425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1369923 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1369923 /var/tmp/bdevperf.sock 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1369923 ']' 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.440 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.440 [2024-11-19 10:48:59.021336] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:21:11.440 [2024-11-19 10:48:59.021415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369923 ] 00:21:11.700 [2024-11-19 10:48:59.086547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.700 [2024-11-19 10:48:59.143465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.700 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.700 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.700 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO 00:21:11.959 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.216 [2024-11-19 10:48:59.760716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.216 nvme0n1 00:21:12.475 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.475 Running I/O for 1 seconds... 
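The `keyring_file_add_key key0 /tmp/tmp.QihNeMcnYO` call above loads the TLS pre-shared key from a file; for NVMe/TCP TLS that file is expected to hold a PSK in the NVMe TLS PSK interchange format. A hedged sketch with a fabricated placeholder key (not the key the test actually used):

```shell
# Placeholder PSK file in NVMe TLS PSK interchange format
# ("NVMeTLSkey-1:<hash id>:<base64 configured PSK + CRC32>:").
# The base64 payload below is made up for illustration only.
key_file=$(mktemp)
printf '%s\n' 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_file"

# Basic shape check before handing the path to keyring_file_add_key.
grep -q '^NVMeTLSkey-1:[0-9][0-9]:.*:$' "$key_file"
```

Once registered, the same key name (`key0`) is referenced by `bdev_nvme_attach_controller --psk key0` on the initiator side, as shown in the log.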
00:21:13.409 3536.00 IOPS, 13.81 MiB/s 00:21:13.409 Latency(us) 00:21:13.409 [2024-11-19T09:49:01.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.409 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:13.409 Verification LBA range: start 0x0 length 0x2000 00:21:13.409 nvme0n1 : 1.02 3589.12 14.02 0.00 0.00 35327.50 9029.40 30874.74 00:21:13.409 [2024-11-19T09:49:01.032Z] =================================================================================================================== 00:21:13.409 [2024-11-19T09:49:01.032Z] Total : 3589.12 14.02 0.00 0.00 35327.50 9029.40 30874.74 00:21:13.409 { 00:21:13.409 "results": [ 00:21:13.409 { 00:21:13.409 "job": "nvme0n1", 00:21:13.409 "core_mask": "0x2", 00:21:13.409 "workload": "verify", 00:21:13.409 "status": "finished", 00:21:13.409 "verify_range": { 00:21:13.409 "start": 0, 00:21:13.409 "length": 8192 00:21:13.409 }, 00:21:13.409 "queue_depth": 128, 00:21:13.409 "io_size": 4096, 00:21:13.409 "runtime": 1.020862, 00:21:13.409 "iops": 3589.1237013425907, 00:21:13.409 "mibps": 14.020014458369495, 00:21:13.409 "io_failed": 0, 00:21:13.409 "io_timeout": 0, 00:21:13.409 "avg_latency_us": 35327.49984150089, 00:21:13.409 "min_latency_us": 9029.404444444444, 00:21:13.409 "max_latency_us": 30874.737777777777 00:21:13.409 } 00:21:13.409 ], 00:21:13.409 "core_count": 1 00:21:13.409 } 00:21:13.409 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:13.409 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.409 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.668 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.668 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:13.668 "subsystems": [ 00:21:13.668 { 00:21:13.668 "subsystem": 
"keyring", 00:21:13.668 "config": [ 00:21:13.668 { 00:21:13.668 "method": "keyring_file_add_key", 00:21:13.668 "params": { 00:21:13.668 "name": "key0", 00:21:13.668 "path": "/tmp/tmp.QihNeMcnYO" 00:21:13.668 } 00:21:13.668 } 00:21:13.668 ] 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "subsystem": "iobuf", 00:21:13.668 "config": [ 00:21:13.668 { 00:21:13.668 "method": "iobuf_set_options", 00:21:13.668 "params": { 00:21:13.668 "small_pool_count": 8192, 00:21:13.668 "large_pool_count": 1024, 00:21:13.668 "small_bufsize": 8192, 00:21:13.668 "large_bufsize": 135168, 00:21:13.668 "enable_numa": false 00:21:13.668 } 00:21:13.668 } 00:21:13.668 ] 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "subsystem": "sock", 00:21:13.668 "config": [ 00:21:13.668 { 00:21:13.668 "method": "sock_set_default_impl", 00:21:13.668 "params": { 00:21:13.668 "impl_name": "posix" 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "sock_impl_set_options", 00:21:13.668 "params": { 00:21:13.668 "impl_name": "ssl", 00:21:13.668 "recv_buf_size": 4096, 00:21:13.668 "send_buf_size": 4096, 00:21:13.668 "enable_recv_pipe": true, 00:21:13.668 "enable_quickack": false, 00:21:13.668 "enable_placement_id": 0, 00:21:13.668 "enable_zerocopy_send_server": true, 00:21:13.668 "enable_zerocopy_send_client": false, 00:21:13.668 "zerocopy_threshold": 0, 00:21:13.668 "tls_version": 0, 00:21:13.668 "enable_ktls": false 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "sock_impl_set_options", 00:21:13.668 "params": { 00:21:13.668 "impl_name": "posix", 00:21:13.668 "recv_buf_size": 2097152, 00:21:13.668 "send_buf_size": 2097152, 00:21:13.668 "enable_recv_pipe": true, 00:21:13.668 "enable_quickack": false, 00:21:13.668 "enable_placement_id": 0, 00:21:13.668 "enable_zerocopy_send_server": true, 00:21:13.668 "enable_zerocopy_send_client": false, 00:21:13.668 "zerocopy_threshold": 0, 00:21:13.668 "tls_version": 0, 00:21:13.668 "enable_ktls": false 00:21:13.668 } 00:21:13.668 } 00:21:13.668 
] 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "subsystem": "vmd", 00:21:13.668 "config": [] 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "subsystem": "accel", 00:21:13.668 "config": [ 00:21:13.668 { 00:21:13.668 "method": "accel_set_options", 00:21:13.668 "params": { 00:21:13.668 "small_cache_size": 128, 00:21:13.668 "large_cache_size": 16, 00:21:13.668 "task_count": 2048, 00:21:13.668 "sequence_count": 2048, 00:21:13.668 "buf_count": 2048 00:21:13.668 } 00:21:13.668 } 00:21:13.668 ] 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "subsystem": "bdev", 00:21:13.668 "config": [ 00:21:13.668 { 00:21:13.668 "method": "bdev_set_options", 00:21:13.668 "params": { 00:21:13.668 "bdev_io_pool_size": 65535, 00:21:13.668 "bdev_io_cache_size": 256, 00:21:13.668 "bdev_auto_examine": true, 00:21:13.668 "iobuf_small_cache_size": 128, 00:21:13.668 "iobuf_large_cache_size": 16 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "bdev_raid_set_options", 00:21:13.668 "params": { 00:21:13.668 "process_window_size_kb": 1024, 00:21:13.668 "process_max_bandwidth_mb_sec": 0 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "bdev_iscsi_set_options", 00:21:13.668 "params": { 00:21:13.668 "timeout_sec": 30 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "bdev_nvme_set_options", 00:21:13.668 "params": { 00:21:13.668 "action_on_timeout": "none", 00:21:13.668 "timeout_us": 0, 00:21:13.668 "timeout_admin_us": 0, 00:21:13.668 "keep_alive_timeout_ms": 10000, 00:21:13.668 "arbitration_burst": 0, 00:21:13.668 "low_priority_weight": 0, 00:21:13.668 "medium_priority_weight": 0, 00:21:13.668 "high_priority_weight": 0, 00:21:13.668 "nvme_adminq_poll_period_us": 10000, 00:21:13.668 "nvme_ioq_poll_period_us": 0, 00:21:13.668 "io_queue_requests": 0, 00:21:13.668 "delay_cmd_submit": true, 00:21:13.668 "transport_retry_count": 4, 00:21:13.668 "bdev_retry_count": 3, 00:21:13.668 "transport_ack_timeout": 0, 00:21:13.668 "ctrlr_loss_timeout_sec": 0, 
00:21:13.668 "reconnect_delay_sec": 0, 00:21:13.668 "fast_io_fail_timeout_sec": 0, 00:21:13.668 "disable_auto_failback": false, 00:21:13.668 "generate_uuids": false, 00:21:13.668 "transport_tos": 0, 00:21:13.668 "nvme_error_stat": false, 00:21:13.668 "rdma_srq_size": 0, 00:21:13.668 "io_path_stat": false, 00:21:13.668 "allow_accel_sequence": false, 00:21:13.668 "rdma_max_cq_size": 0, 00:21:13.668 "rdma_cm_event_timeout_ms": 0, 00:21:13.668 "dhchap_digests": [ 00:21:13.668 "sha256", 00:21:13.668 "sha384", 00:21:13.668 "sha512" 00:21:13.668 ], 00:21:13.668 "dhchap_dhgroups": [ 00:21:13.668 "null", 00:21:13.668 "ffdhe2048", 00:21:13.668 "ffdhe3072", 00:21:13.668 "ffdhe4096", 00:21:13.668 "ffdhe6144", 00:21:13.668 "ffdhe8192" 00:21:13.668 ] 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "bdev_nvme_set_hotplug", 00:21:13.668 "params": { 00:21:13.668 "period_us": 100000, 00:21:13.668 "enable": false 00:21:13.668 } 00:21:13.668 }, 00:21:13.668 { 00:21:13.668 "method": "bdev_malloc_create", 00:21:13.668 "params": { 00:21:13.668 "name": "malloc0", 00:21:13.668 "num_blocks": 8192, 00:21:13.668 "block_size": 4096, 00:21:13.668 "physical_block_size": 4096, 00:21:13.668 "uuid": "e109a6eb-2402-4420-aaa9-9ab4017be44e", 00:21:13.668 "optimal_io_boundary": 0, 00:21:13.668 "md_size": 0, 00:21:13.668 "dif_type": 0, 00:21:13.668 "dif_is_head_of_md": false, 00:21:13.668 "dif_pi_format": 0 00:21:13.668 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "bdev_wait_for_examine" 00:21:13.669 } 00:21:13.669 ] 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "subsystem": "nbd", 00:21:13.669 "config": [] 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "subsystem": "scheduler", 00:21:13.669 "config": [ 00:21:13.669 { 00:21:13.669 "method": "framework_set_scheduler", 00:21:13.669 "params": { 00:21:13.669 "name": "static" 00:21:13.669 } 00:21:13.669 } 00:21:13.669 ] 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "subsystem": "nvmf", 00:21:13.669 "config": [ 00:21:13.669 { 
00:21:13.669 "method": "nvmf_set_config", 00:21:13.669 "params": { 00:21:13.669 "discovery_filter": "match_any", 00:21:13.669 "admin_cmd_passthru": { 00:21:13.669 "identify_ctrlr": false 00:21:13.669 }, 00:21:13.669 "dhchap_digests": [ 00:21:13.669 "sha256", 00:21:13.669 "sha384", 00:21:13.669 "sha512" 00:21:13.669 ], 00:21:13.669 "dhchap_dhgroups": [ 00:21:13.669 "null", 00:21:13.669 "ffdhe2048", 00:21:13.669 "ffdhe3072", 00:21:13.669 "ffdhe4096", 00:21:13.669 "ffdhe6144", 00:21:13.669 "ffdhe8192" 00:21:13.669 ] 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_set_max_subsystems", 00:21:13.669 "params": { 00:21:13.669 "max_subsystems": 1024 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_set_crdt", 00:21:13.669 "params": { 00:21:13.669 "crdt1": 0, 00:21:13.669 "crdt2": 0, 00:21:13.669 "crdt3": 0 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_create_transport", 00:21:13.669 "params": { 00:21:13.669 "trtype": "TCP", 00:21:13.669 "max_queue_depth": 128, 00:21:13.669 "max_io_qpairs_per_ctrlr": 127, 00:21:13.669 "in_capsule_data_size": 4096, 00:21:13.669 "max_io_size": 131072, 00:21:13.669 "io_unit_size": 131072, 00:21:13.669 "max_aq_depth": 128, 00:21:13.669 "num_shared_buffers": 511, 00:21:13.669 "buf_cache_size": 4294967295, 00:21:13.669 "dif_insert_or_strip": false, 00:21:13.669 "zcopy": false, 00:21:13.669 "c2h_success": false, 00:21:13.669 "sock_priority": 0, 00:21:13.669 "abort_timeout_sec": 1, 00:21:13.669 "ack_timeout": 0, 00:21:13.669 "data_wr_pool_size": 0 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_create_subsystem", 00:21:13.669 "params": { 00:21:13.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.669 "allow_any_host": false, 00:21:13.669 "serial_number": "00000000000000000000", 00:21:13.669 "model_number": "SPDK bdev Controller", 00:21:13.669 "max_namespaces": 32, 00:21:13.669 "min_cntlid": 1, 00:21:13.669 "max_cntlid": 65519, 00:21:13.669 
"ana_reporting": false 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_subsystem_add_host", 00:21:13.669 "params": { 00:21:13.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.669 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.669 "psk": "key0" 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_subsystem_add_ns", 00:21:13.669 "params": { 00:21:13.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.669 "namespace": { 00:21:13.669 "nsid": 1, 00:21:13.669 "bdev_name": "malloc0", 00:21:13.669 "nguid": "E109A6EB24024420AAA99AB4017BE44E", 00:21:13.669 "uuid": "e109a6eb-2402-4420-aaa9-9ab4017be44e", 00:21:13.669 "no_auto_visible": false 00:21:13.669 } 00:21:13.669 } 00:21:13.669 }, 00:21:13.669 { 00:21:13.669 "method": "nvmf_subsystem_add_listener", 00:21:13.669 "params": { 00:21:13.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.669 "listen_address": { 00:21:13.669 "trtype": "TCP", 00:21:13.669 "adrfam": "IPv4", 00:21:13.669 "traddr": "10.0.0.2", 00:21:13.669 "trsvcid": "4420" 00:21:13.669 }, 00:21:13.669 "secure_channel": false, 00:21:13.669 "sock_impl": "ssl" 00:21:13.669 } 00:21:13.669 } 00:21:13.669 ] 00:21:13.669 } 00:21:13.669 ] 00:21:13.669 }' 00:21:13.669 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:13.928 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:13.928 "subsystems": [ 00:21:13.928 { 00:21:13.928 "subsystem": "keyring", 00:21:13.928 "config": [ 00:21:13.928 { 00:21:13.928 "method": "keyring_file_add_key", 00:21:13.928 "params": { 00:21:13.928 "name": "key0", 00:21:13.928 "path": "/tmp/tmp.QihNeMcnYO" 00:21:13.928 } 00:21:13.928 } 00:21:13.928 ] 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "subsystem": "iobuf", 00:21:13.928 "config": [ 00:21:13.928 { 00:21:13.928 "method": "iobuf_set_options", 00:21:13.928 "params": { 00:21:13.928 
"small_pool_count": 8192, 00:21:13.928 "large_pool_count": 1024, 00:21:13.928 "small_bufsize": 8192, 00:21:13.928 "large_bufsize": 135168, 00:21:13.928 "enable_numa": false 00:21:13.928 } 00:21:13.928 } 00:21:13.928 ] 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "subsystem": "sock", 00:21:13.928 "config": [ 00:21:13.928 { 00:21:13.928 "method": "sock_set_default_impl", 00:21:13.928 "params": { 00:21:13.928 "impl_name": "posix" 00:21:13.928 } 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "method": "sock_impl_set_options", 00:21:13.928 "params": { 00:21:13.928 "impl_name": "ssl", 00:21:13.928 "recv_buf_size": 4096, 00:21:13.928 "send_buf_size": 4096, 00:21:13.928 "enable_recv_pipe": true, 00:21:13.928 "enable_quickack": false, 00:21:13.928 "enable_placement_id": 0, 00:21:13.928 "enable_zerocopy_send_server": true, 00:21:13.928 "enable_zerocopy_send_client": false, 00:21:13.928 "zerocopy_threshold": 0, 00:21:13.928 "tls_version": 0, 00:21:13.928 "enable_ktls": false 00:21:13.928 } 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "method": "sock_impl_set_options", 00:21:13.928 "params": { 00:21:13.928 "impl_name": "posix", 00:21:13.928 "recv_buf_size": 2097152, 00:21:13.928 "send_buf_size": 2097152, 00:21:13.928 "enable_recv_pipe": true, 00:21:13.928 "enable_quickack": false, 00:21:13.928 "enable_placement_id": 0, 00:21:13.928 "enable_zerocopy_send_server": true, 00:21:13.928 "enable_zerocopy_send_client": false, 00:21:13.928 "zerocopy_threshold": 0, 00:21:13.928 "tls_version": 0, 00:21:13.928 "enable_ktls": false 00:21:13.928 } 00:21:13.928 } 00:21:13.928 ] 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "subsystem": "vmd", 00:21:13.928 "config": [] 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "subsystem": "accel", 00:21:13.928 "config": [ 00:21:13.928 { 00:21:13.928 "method": "accel_set_options", 00:21:13.928 "params": { 00:21:13.928 "small_cache_size": 128, 00:21:13.928 "large_cache_size": 16, 00:21:13.928 "task_count": 2048, 00:21:13.928 "sequence_count": 2048, 00:21:13.928 
"buf_count": 2048 00:21:13.928 } 00:21:13.928 } 00:21:13.928 ] 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "subsystem": "bdev", 00:21:13.928 "config": [ 00:21:13.928 { 00:21:13.928 "method": "bdev_set_options", 00:21:13.928 "params": { 00:21:13.928 "bdev_io_pool_size": 65535, 00:21:13.928 "bdev_io_cache_size": 256, 00:21:13.928 "bdev_auto_examine": true, 00:21:13.928 "iobuf_small_cache_size": 128, 00:21:13.928 "iobuf_large_cache_size": 16 00:21:13.928 } 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "method": "bdev_raid_set_options", 00:21:13.928 "params": { 00:21:13.928 "process_window_size_kb": 1024, 00:21:13.928 "process_max_bandwidth_mb_sec": 0 00:21:13.928 } 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "method": "bdev_iscsi_set_options", 00:21:13.928 "params": { 00:21:13.928 "timeout_sec": 30 00:21:13.928 } 00:21:13.928 }, 00:21:13.928 { 00:21:13.928 "method": "bdev_nvme_set_options", 00:21:13.928 "params": { 00:21:13.928 "action_on_timeout": "none", 00:21:13.928 "timeout_us": 0, 00:21:13.928 "timeout_admin_us": 0, 00:21:13.928 "keep_alive_timeout_ms": 10000, 00:21:13.928 "arbitration_burst": 0, 00:21:13.928 "low_priority_weight": 0, 00:21:13.928 "medium_priority_weight": 0, 00:21:13.928 "high_priority_weight": 0, 00:21:13.928 "nvme_adminq_poll_period_us": 10000, 00:21:13.928 "nvme_ioq_poll_period_us": 0, 00:21:13.928 "io_queue_requests": 512, 00:21:13.928 "delay_cmd_submit": true, 00:21:13.928 "transport_retry_count": 4, 00:21:13.928 "bdev_retry_count": 3, 00:21:13.928 "transport_ack_timeout": 0, 00:21:13.928 "ctrlr_loss_timeout_sec": 0, 00:21:13.928 "reconnect_delay_sec": 0, 00:21:13.928 "fast_io_fail_timeout_sec": 0, 00:21:13.928 "disable_auto_failback": false, 00:21:13.928 "generate_uuids": false, 00:21:13.928 "transport_tos": 0, 00:21:13.928 "nvme_error_stat": false, 00:21:13.928 "rdma_srq_size": 0, 00:21:13.928 "io_path_stat": false, 00:21:13.928 "allow_accel_sequence": false, 00:21:13.928 "rdma_max_cq_size": 0, 00:21:13.928 "rdma_cm_event_timeout_ms": 0, 
00:21:13.928 "dhchap_digests": [ 00:21:13.928 "sha256", 00:21:13.928 "sha384", 00:21:13.928 "sha512" 00:21:13.928 ], 00:21:13.928 "dhchap_dhgroups": [ 00:21:13.928 "null", 00:21:13.928 "ffdhe2048", 00:21:13.928 "ffdhe3072", 00:21:13.929 "ffdhe4096", 00:21:13.929 "ffdhe6144", 00:21:13.929 "ffdhe8192" 00:21:13.929 ] 00:21:13.929 } 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "method": "bdev_nvme_attach_controller", 00:21:13.929 "params": { 00:21:13.929 "name": "nvme0", 00:21:13.929 "trtype": "TCP", 00:21:13.929 "adrfam": "IPv4", 00:21:13.929 "traddr": "10.0.0.2", 00:21:13.929 "trsvcid": "4420", 00:21:13.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.929 "prchk_reftag": false, 00:21:13.929 "prchk_guard": false, 00:21:13.929 "ctrlr_loss_timeout_sec": 0, 00:21:13.929 "reconnect_delay_sec": 0, 00:21:13.929 "fast_io_fail_timeout_sec": 0, 00:21:13.929 "psk": "key0", 00:21:13.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.929 "hdgst": false, 00:21:13.929 "ddgst": false, 00:21:13.929 "multipath": "multipath" 00:21:13.929 } 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "method": "bdev_nvme_set_hotplug", 00:21:13.929 "params": { 00:21:13.929 "period_us": 100000, 00:21:13.929 "enable": false 00:21:13.929 } 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "method": "bdev_enable_histogram", 00:21:13.929 "params": { 00:21:13.929 "name": "nvme0n1", 00:21:13.929 "enable": true 00:21:13.929 } 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "method": "bdev_wait_for_examine" 00:21:13.929 } 00:21:13.929 ] 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "subsystem": "nbd", 00:21:13.929 "config": [] 00:21:13.929 } 00:21:13.929 ] 00:21:13.929 }' 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1369923 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1369923 ']' 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1369923 00:21:13.929 10:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369923 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369923' 00:21:13.929 killing process with pid 1369923 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1369923 00:21:13.929 Received shutdown signal, test time was about 1.000000 seconds 00:21:13.929 00:21:13.929 Latency(us) 00:21:13.929 [2024-11-19T09:49:01.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.929 [2024-11-19T09:49:01.552Z] =================================================================================================================== 00:21:13.929 [2024-11-19T09:49:01.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.929 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1369923 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1369793 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1369793 ']' 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1369793 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.187 
10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369793 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369793' 00:21:14.187 killing process with pid 1369793 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1369793 00:21:14.187 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1369793 00:21:14.446 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:14.446 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.446 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:14.446 "subsystems": [ 00:21:14.446 { 00:21:14.446 "subsystem": "keyring", 00:21:14.446 "config": [ 00:21:14.446 { 00:21:14.446 "method": "keyring_file_add_key", 00:21:14.446 "params": { 00:21:14.446 "name": "key0", 00:21:14.446 "path": "/tmp/tmp.QihNeMcnYO" 00:21:14.446 } 00:21:14.446 } 00:21:14.446 ] 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "subsystem": "iobuf", 00:21:14.446 "config": [ 00:21:14.446 { 00:21:14.446 "method": "iobuf_set_options", 00:21:14.446 "params": { 00:21:14.446 "small_pool_count": 8192, 00:21:14.446 "large_pool_count": 1024, 00:21:14.446 "small_bufsize": 8192, 00:21:14.446 "large_bufsize": 135168, 00:21:14.446 "enable_numa": false 00:21:14.446 } 00:21:14.446 } 00:21:14.446 ] 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "subsystem": "sock", 00:21:14.446 "config": [ 00:21:14.446 { 00:21:14.446 "method": "sock_set_default_impl", 00:21:14.446 "params": { 00:21:14.446 "impl_name": "posix" 
00:21:14.446 } 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "method": "sock_impl_set_options", 00:21:14.446 "params": { 00:21:14.446 "impl_name": "ssl", 00:21:14.446 "recv_buf_size": 4096, 00:21:14.446 "send_buf_size": 4096, 00:21:14.446 "enable_recv_pipe": true, 00:21:14.446 "enable_quickack": false, 00:21:14.446 "enable_placement_id": 0, 00:21:14.446 "enable_zerocopy_send_server": true, 00:21:14.446 "enable_zerocopy_send_client": false, 00:21:14.446 "zerocopy_threshold": 0, 00:21:14.446 "tls_version": 0, 00:21:14.446 "enable_ktls": false 00:21:14.446 } 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "method": "sock_impl_set_options", 00:21:14.446 "params": { 00:21:14.446 "impl_name": "posix", 00:21:14.446 "recv_buf_size": 2097152, 00:21:14.446 "send_buf_size": 2097152, 00:21:14.446 "enable_recv_pipe": true, 00:21:14.446 "enable_quickack": false, 00:21:14.446 "enable_placement_id": 0, 00:21:14.446 "enable_zerocopy_send_server": true, 00:21:14.446 "enable_zerocopy_send_client": false, 00:21:14.446 "zerocopy_threshold": 0, 00:21:14.446 "tls_version": 0, 00:21:14.446 "enable_ktls": false 00:21:14.446 } 00:21:14.446 } 00:21:14.446 ] 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "subsystem": "vmd", 00:21:14.446 "config": [] 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "subsystem": "accel", 00:21:14.446 "config": [ 00:21:14.446 { 00:21:14.446 "method": "accel_set_options", 00:21:14.446 "params": { 00:21:14.446 "small_cache_size": 128, 00:21:14.446 "large_cache_size": 16, 00:21:14.446 "task_count": 2048, 00:21:14.446 "sequence_count": 2048, 00:21:14.446 "buf_count": 2048 00:21:14.446 } 00:21:14.446 } 00:21:14.446 ] 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "subsystem": "bdev", 00:21:14.446 "config": [ 00:21:14.446 { 00:21:14.446 "method": "bdev_set_options", 00:21:14.446 "params": { 00:21:14.446 "bdev_io_pool_size": 65535, 00:21:14.446 "bdev_io_cache_size": 256, 00:21:14.446 "bdev_auto_examine": true, 00:21:14.446 "iobuf_small_cache_size": 128, 00:21:14.446 
"iobuf_large_cache_size": 16 00:21:14.446 } 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "method": "bdev_raid_set_options", 00:21:14.446 "params": { 00:21:14.446 "process_window_size_kb": 1024, 00:21:14.446 "process_max_bandwidth_mb_sec": 0 00:21:14.446 } 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "method": "bdev_iscsi_set_options", 00:21:14.446 "params": { 00:21:14.446 "timeout_sec": 30 00:21:14.446 } 00:21:14.446 }, 00:21:14.446 { 00:21:14.446 "method": "bdev_nvme_set_options", 00:21:14.446 "params": { 00:21:14.446 "action_on_timeout": "none", 00:21:14.446 "timeout_us": 0, 00:21:14.446 "timeout_admin_us": 0, 00:21:14.446 "keep_alive_timeout_ms": 10000, 00:21:14.446 "arbitration_burst": 0, 00:21:14.446 "low_priority_weight": 0, 00:21:14.446 "medium_priority_weight": 0, 00:21:14.446 "high_priority_weight": 0, 00:21:14.446 "nvme_adminq_poll_period_us": 10000, 00:21:14.446 "nvme_ioq_poll_period_us": 0, 00:21:14.446 "io_queue_requests": 0, 00:21:14.446 "delay_cmd_submit": true, 00:21:14.446 "transport_retry_count": 4, 00:21:14.446 "bdev_retry_count": 3, 00:21:14.446 "transport_ack_timeout": 0, 00:21:14.446 "ctrlr_loss_timeout_sec": 0, 00:21:14.447 "reconnect_delay_sec": 0, 00:21:14.447 "fast_io_fail_timeout_sec": 0, 00:21:14.447 "disable_auto_failback": false, 00:21:14.447 "generate_uuids": false, 00:21:14.447 "transport_tos": 0, 00:21:14.447 "nvme_error_stat": false, 00:21:14.447 "rdma_srq_size": 0, 00:21:14.447 "io_path_stat": false, 00:21:14.447 "allow_accel_sequence": false, 00:21:14.447 "rdma_max_cq_size": 0, 00:21:14.447 "rdma_cm_event_timeout_ms": 0, 00:21:14.447 "dhchap_digests": [ 00:21:14.447 "sha256", 00:21:14.447 "sha384", 00:21:14.447 "sha512" 00:21:14.447 ], 00:21:14.447 "dhchap_dhgroups": [ 00:21:14.447 "null", 00:21:14.447 "ffdhe2048", 00:21:14.447 "ffdhe3072", 00:21:14.447 "ffdhe4096", 00:21:14.447 "ffdhe6144", 00:21:14.447 "ffdhe8192" 00:21:14.447 ] 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "bdev_nvme_set_hotplug", 
00:21:14.447 "params": { 00:21:14.447 "period_us": 100000, 00:21:14.447 "enable": false 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "bdev_malloc_create", 00:21:14.447 "params": { 00:21:14.447 "name": "malloc0", 00:21:14.447 "num_blocks": 8192, 00:21:14.447 "block_size": 4096, 00:21:14.447 "physical_block_size": 4096, 00:21:14.447 "uuid": "e109a6eb-2402-4420-aaa9-9ab4017be44e", 00:21:14.447 "optimal_io_boundary": 0, 00:21:14.447 "md_size": 0, 00:21:14.447 "dif_type": 0, 00:21:14.447 "dif_is_head_of_md": false, 00:21:14.447 "dif_pi_format": 0 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "bdev_wait_for_examine" 00:21:14.447 } 00:21:14.447 ] 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "subsystem": "nbd", 00:21:14.447 "config": [] 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "subsystem": "scheduler", 00:21:14.447 "config": [ 00:21:14.447 { 00:21:14.447 "method": "framework_set_scheduler", 00:21:14.447 "params": { 00:21:14.447 "name": "static" 00:21:14.447 } 00:21:14.447 } 00:21:14.447 ] 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "subsystem": "nvmf", 00:21:14.447 "config": [ 00:21:14.447 { 00:21:14.447 "method": "nvmf_set_config", 00:21:14.447 "params": { 00:21:14.447 "discovery_filter": "match_any", 00:21:14.447 "admin_cmd_passthru": { 00:21:14.447 "identify_ctrlr": false 00:21:14.447 }, 00:21:14.447 "dhchap_digests": [ 00:21:14.447 "sha256", 00:21:14.447 "sha384", 00:21:14.447 "sha512" 00:21:14.447 ], 00:21:14.447 "dhchap_dhgroups": [ 00:21:14.447 "null", 00:21:14.447 "ffdhe2048", 00:21:14.447 "ffdhe3072", 00:21:14.447 "ffdhe4096", 00:21:14.447 "ffdhe6144", 00:21:14.447 "ffdhe8192" 00:21:14.447 ] 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_set_max_subsystems", 00:21:14.447 "params": { 00:21:14.447 "max_subsystems": 1024 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_set_crdt", 00:21:14.447 "params": { 00:21:14.447 "crdt1": 0, 00:21:14.447 "crdt2": 0, 00:21:14.447 
"crdt3": 0 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_create_transport", 00:21:14.447 "params": { 00:21:14.447 "trtype": "TCP", 00:21:14.447 "max_queue_depth": 128, 00:21:14.447 "max_io_qpairs_per_ctrlr": 127, 00:21:14.447 "in_capsule_data_size": 4096, 00:21:14.447 "max_io_size": 131072, 00:21:14.447 "io_unit_size": 131072, 00:21:14.447 "max_aq_depth": 128, 00:21:14.447 "num_shared_buffers": 511, 00:21:14.447 "buf_cache_size": 4294967295, 00:21:14.447 "dif_insert_or_strip": false, 00:21:14.447 "zcopy": false, 00:21:14.447 "c2h_success": false, 00:21:14.447 "sock_priority": 0, 00:21:14.447 "abort_timeout_sec": 1, 00:21:14.447 "ack_timeout": 0, 00:21:14.447 "data_wr_pool_size": 0 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_create_subsystem", 00:21:14.447 "params": { 00:21:14.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.447 "allow_any_host": false, 00:21:14.447 "serial_number": "00000000000000000000", 00:21:14.447 "model_number": "SPDK bdev Controller", 00:21:14.447 "max_namespaces": 32, 00:21:14.447 "min_cntlid": 1, 00:21:14.447 "max_cntlid": 65519, 00:21:14.447 "ana_reporting": false 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_subsystem_add_host", 00:21:14.447 "params": { 00:21:14.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.447 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.447 "psk": "key0" 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_subsystem_add_ns", 00:21:14.447 "params": { 00:21:14.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.447 "namespace": { 00:21:14.447 "nsid": 1, 00:21:14.447 "bdev_name": "malloc0", 00:21:14.447 "nguid": "E109A6EB24024420AAA99AB4017BE44E", 00:21:14.447 "uuid": "e109a6eb-2402-4420-aaa9-9ab4017be44e", 00:21:14.447 "no_auto_visible": false 00:21:14.447 } 00:21:14.447 } 00:21:14.447 }, 00:21:14.447 { 00:21:14.447 "method": "nvmf_subsystem_add_listener", 00:21:14.447 "params": { 00:21:14.447 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:14.447 "listen_address": { 00:21:14.447 "trtype": "TCP", 00:21:14.447 "adrfam": "IPv4", 00:21:14.447 "traddr": "10.0.0.2", 00:21:14.447 "trsvcid": "4420" 00:21:14.447 }, 00:21:14.447 "secure_channel": false, 00:21:14.447 "sock_impl": "ssl" 00:21:14.447 } 00:21:14.447 } 00:21:14.447 ] 00:21:14.447 } 00:21:14.447 ] 00:21:14.447 }' 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1370222 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1370222 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1370222 ']' 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.447 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.447 [2024-11-19 10:49:02.015016] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:21:14.447 [2024-11-19 10:49:02.015119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.706 [2024-11-19 10:49:02.087021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.706 [2024-11-19 10:49:02.143414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.706 [2024-11-19 10:49:02.143465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.706 [2024-11-19 10:49:02.143480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.706 [2024-11-19 10:49:02.143491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.706 [2024-11-19 10:49:02.143501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.706 [2024-11-19 10:49:02.144108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.964 [2024-11-19 10:49:02.384005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.964 [2024-11-19 10:49:02.416039] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.964 [2024-11-19 10:49:02.416269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1370376 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1370376 /var/tmp/bdevperf.sock 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1370376 ']' 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.530 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:15.530 "subsystems": [ 00:21:15.530 { 00:21:15.530 "subsystem": "keyring", 00:21:15.530 "config": [ 00:21:15.530 { 00:21:15.530 "method": "keyring_file_add_key", 00:21:15.530 "params": { 00:21:15.530 "name": "key0", 00:21:15.530 "path": "/tmp/tmp.QihNeMcnYO" 00:21:15.530 } 00:21:15.530 } 00:21:15.530 ] 00:21:15.530 }, 00:21:15.530 { 00:21:15.530 "subsystem": "iobuf", 00:21:15.530 "config": [ 00:21:15.530 { 00:21:15.530 "method": "iobuf_set_options", 00:21:15.530 "params": { 00:21:15.530 "small_pool_count": 8192, 00:21:15.530 "large_pool_count": 1024, 00:21:15.530 "small_bufsize": 8192, 00:21:15.530 "large_bufsize": 135168, 00:21:15.530 "enable_numa": false 00:21:15.530 } 00:21:15.530 } 00:21:15.530 ] 00:21:15.530 }, 00:21:15.530 { 00:21:15.530 "subsystem": "sock", 00:21:15.530 "config": [ 00:21:15.530 { 00:21:15.530 "method": "sock_set_default_impl", 00:21:15.530 "params": { 00:21:15.530 "impl_name": "posix" 00:21:15.530 } 00:21:15.530 }, 00:21:15.530 { 00:21:15.530 "method": "sock_impl_set_options", 00:21:15.530 "params": { 00:21:15.530 "impl_name": "ssl", 00:21:15.530 "recv_buf_size": 4096, 00:21:15.530 "send_buf_size": 4096, 00:21:15.530 "enable_recv_pipe": true, 00:21:15.530 "enable_quickack": false, 00:21:15.530 "enable_placement_id": 0, 00:21:15.530 "enable_zerocopy_send_server": true, 00:21:15.530 "enable_zerocopy_send_client": false, 00:21:15.530 "zerocopy_threshold": 0, 00:21:15.530 "tls_version": 0, 00:21:15.530 "enable_ktls": false 00:21:15.530 } 00:21:15.530 }, 00:21:15.530 { 00:21:15.531 "method": "sock_impl_set_options", 00:21:15.531 "params": { 
00:21:15.531 "impl_name": "posix", 00:21:15.531 "recv_buf_size": 2097152, 00:21:15.531 "send_buf_size": 2097152, 00:21:15.531 "enable_recv_pipe": true, 00:21:15.531 "enable_quickack": false, 00:21:15.531 "enable_placement_id": 0, 00:21:15.531 "enable_zerocopy_send_server": true, 00:21:15.531 "enable_zerocopy_send_client": false, 00:21:15.531 "zerocopy_threshold": 0, 00:21:15.531 "tls_version": 0, 00:21:15.531 "enable_ktls": false 00:21:15.531 } 00:21:15.531 } 00:21:15.531 ] 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "subsystem": "vmd", 00:21:15.531 "config": [] 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "subsystem": "accel", 00:21:15.531 "config": [ 00:21:15.531 { 00:21:15.531 "method": "accel_set_options", 00:21:15.531 "params": { 00:21:15.531 "small_cache_size": 128, 00:21:15.531 "large_cache_size": 16, 00:21:15.531 "task_count": 2048, 00:21:15.531 "sequence_count": 2048, 00:21:15.531 "buf_count": 2048 00:21:15.531 } 00:21:15.531 } 00:21:15.531 ] 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "subsystem": "bdev", 00:21:15.531 "config": [ 00:21:15.531 { 00:21:15.531 "method": "bdev_set_options", 00:21:15.531 "params": { 00:21:15.531 "bdev_io_pool_size": 65535, 00:21:15.531 "bdev_io_cache_size": 256, 00:21:15.531 "bdev_auto_examine": true, 00:21:15.531 "iobuf_small_cache_size": 128, 00:21:15.531 "iobuf_large_cache_size": 16 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_raid_set_options", 00:21:15.531 "params": { 00:21:15.531 "process_window_size_kb": 1024, 00:21:15.531 "process_max_bandwidth_mb_sec": 0 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_iscsi_set_options", 00:21:15.531 "params": { 00:21:15.531 "timeout_sec": 30 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_nvme_set_options", 00:21:15.531 "params": { 00:21:15.531 "action_on_timeout": "none", 00:21:15.531 "timeout_us": 0, 00:21:15.531 "timeout_admin_us": 0, 00:21:15.531 "keep_alive_timeout_ms": 10000, 00:21:15.531 
"arbitration_burst": 0, 00:21:15.531 "low_priority_weight": 0, 00:21:15.531 "medium_priority_weight": 0, 00:21:15.531 "high_priority_weight": 0, 00:21:15.531 "nvme_adminq_poll_period_us": 10000, 00:21:15.531 "nvme_ioq_poll_period_us": 0, 00:21:15.531 "io_queue_requests": 512, 00:21:15.531 "delay_cmd_submit": true, 00:21:15.531 "transport_retry_count": 4, 00:21:15.531 "bdev_retry_count": 3, 00:21:15.531 "transport_ack_timeout": 0, 00:21:15.531 "ctrlr_loss_timeout_sec": 0, 00:21:15.531 "reconnect_delay_sec": 0, 00:21:15.531 "fast_io_fail_timeout_sec": 0, 00:21:15.531 "disable_auto_failback": false, 00:21:15.531 "generate_uuids": false, 00:21:15.531 "transport_tos": 0, 00:21:15.531 "nvme_error_stat": false, 00:21:15.531 "rdma_srq_size": 0, 00:21:15.531 "io_path_stat": false, 00:21:15.531 "allow_accel_sequence": false, 00:21:15.531 "rdma_max_cq_size": 0, 00:21:15.531 "rdma_cm_event_timeout_ms": 0, 00:21:15.531 "dhchap_digests": [ 00:21:15.531 "sha256", 00:21:15.531 "sha384", 00:21:15.531 "sha512" 00:21:15.531 ], 00:21:15.531 "dhchap_dhgroups": [ 00:21:15.531 "null", 00:21:15.531 "ffdhe2048", 00:21:15.531 "ffdhe3072", 00:21:15.531 "ffdhe4096", 00:21:15.531 "ffdhe6144", 00:21:15.531 "ffdhe8192" 00:21:15.531 ] 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_nvme_attach_controller", 00:21:15.531 "params": { 00:21:15.531 "name": "nvme0", 00:21:15.531 "trtype": "TCP", 00:21:15.531 "adrfam": "IPv4", 00:21:15.531 "traddr": "10.0.0.2", 00:21:15.531 "trsvcid": "4420", 00:21:15.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.531 "prchk_reftag": false, 00:21:15.531 "prchk_guard": false, 00:21:15.531 "ctrlr_loss_timeout_sec": 0, 00:21:15.531 "reconnect_delay_sec": 0, 00:21:15.531 "fast_io_fail_timeout_sec": 0, 00:21:15.531 "psk": "key0", 00:21:15.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.531 "hdgst": false, 00:21:15.531 "ddgst": false, 00:21:15.531 "multipath": "multipath" 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 
"method": "bdev_nvme_set_hotplug", 00:21:15.531 "params": { 00:21:15.531 "period_us": 100000, 00:21:15.531 "enable": false 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_enable_histogram", 00:21:15.531 "params": { 00:21:15.531 "name": "nvme0n1", 00:21:15.531 "enable": true 00:21:15.531 } 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "method": "bdev_wait_for_examine" 00:21:15.531 } 00:21:15.531 ] 00:21:15.531 }, 00:21:15.531 { 00:21:15.531 "subsystem": "nbd", 00:21:15.531 "config": [] 00:21:15.531 } 00:21:15.531 ] 00:21:15.531 }' 00:21:15.531 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.531 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.531 [2024-11-19 10:49:03.131932] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:15.531 [2024-11-19 10:49:03.132014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370376 ] 00:21:15.790 [2024-11-19 10:49:03.199786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.790 [2024-11-19 10:49:03.257927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.048 [2024-11-19 10:49:03.438809] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.048 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.048 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.048 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:16.048 10:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:16.306 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.306 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.563 Running I/O for 1 seconds... 00:21:17.497 3553.00 IOPS, 13.88 MiB/s 00:21:17.497 Latency(us) 00:21:17.497 [2024-11-19T09:49:05.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.497 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.497 Verification LBA range: start 0x0 length 0x2000 00:21:17.497 nvme0n1 : 1.02 3614.88 14.12 0.00 0.00 35097.72 6165.24 42913.94 00:21:17.497 [2024-11-19T09:49:05.120Z] =================================================================================================================== 00:21:17.497 [2024-11-19T09:49:05.120Z] Total : 3614.88 14.12 0.00 0.00 35097.72 6165.24 42913.94 00:21:17.497 { 00:21:17.497 "results": [ 00:21:17.497 { 00:21:17.497 "job": "nvme0n1", 00:21:17.497 "core_mask": "0x2", 00:21:17.497 "workload": "verify", 00:21:17.497 "status": "finished", 00:21:17.497 "verify_range": { 00:21:17.497 "start": 0, 00:21:17.497 "length": 8192 00:21:17.497 }, 00:21:17.497 "queue_depth": 128, 00:21:17.497 "io_size": 4096, 00:21:17.497 "runtime": 1.018567, 00:21:17.497 "iops": 3614.882477048638, 00:21:17.497 "mibps": 14.120634675971242, 00:21:17.497 "io_failed": 0, 00:21:17.497 "io_timeout": 0, 00:21:17.497 "avg_latency_us": 35097.717321101656, 00:21:17.497 "min_latency_us": 6165.2385185185185, 00:21:17.497 "max_latency_us": 42913.943703703706 00:21:17.497 } 00:21:17.497 ], 00:21:17.497 "core_count": 1 00:21:17.497 } 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:17.497 10:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:17.497 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.497 nvmf_trace.0 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1370376 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1370376 ']' 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1370376 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1370376 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370376' 00:21:17.497 killing process with pid 1370376 00:21:17.497 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1370376 00:21:17.497 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.497 00:21:17.497 Latency(us) 00:21:17.497 [2024-11-19T09:49:05.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.497 [2024-11-19T09:49:05.120Z] =================================================================================================================== 00:21:17.497 [2024-11-19T09:49:05.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.498 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1370376 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.756 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.756 rmmod nvme_tcp 00:21:17.756 rmmod nvme_fabrics 00:21:18.015 rmmod nvme_keyring 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1370222 ']' 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1370222 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1370222 ']' 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1370222 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1370222 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370222' 00:21:18.015 killing process with pid 1370222 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1370222 00:21:18.015 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1370222 00:21:18.295 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.295 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.296 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kA2XIOh4gz /tmp/tmp.nVpHIgwnoG /tmp/tmp.QihNeMcnYO 00:21:20.250 00:21:20.250 real 1m22.794s 00:21:20.250 user 2m19.174s 00:21:20.250 sys 0m24.485s 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.250 ************************************ 00:21:20.250 END TEST nvmf_tls 00:21:20.250 ************************************ 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.250 ************************************ 00:21:20.250 START TEST nvmf_fips 00:21:20.250 ************************************ 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.250 * Looking for test storage... 00:21:20.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.250 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.509 
10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:20.509 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:20.510 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.510 --rc genhtml_branch_coverage=1 00:21:20.510 --rc genhtml_function_coverage=1 00:21:20.510 --rc genhtml_legend=1 00:21:20.510 --rc geninfo_all_blocks=1 00:21:20.510 --rc geninfo_unexecuted_blocks=1 00:21:20.510 00:21:20.510 ' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.510 --rc genhtml_branch_coverage=1 00:21:20.510 --rc genhtml_function_coverage=1 00:21:20.510 --rc genhtml_legend=1 00:21:20.510 --rc geninfo_all_blocks=1 00:21:20.510 --rc geninfo_unexecuted_blocks=1 00:21:20.510 00:21:20.510 ' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.510 --rc genhtml_branch_coverage=1 00:21:20.510 --rc genhtml_function_coverage=1 00:21:20.510 --rc genhtml_legend=1 00:21:20.510 --rc geninfo_all_blocks=1 00:21:20.510 --rc geninfo_unexecuted_blocks=1 00:21:20.510 00:21:20.510 ' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.510 --rc genhtml_branch_coverage=1 00:21:20.510 --rc genhtml_function_coverage=1 00:21:20.510 --rc genhtml_legend=1 00:21:20.510 --rc geninfo_all_blocks=1 00:21:20.510 --rc geninfo_unexecuted_blocks=1 00:21:20.510 00:21:20.510 ' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.510 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.510 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
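The `build_nvmf_app_args` trace above accumulates the target's launch flags in a bash array (`NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)`). A minimal standalone sketch of that pattern — the binary name and SHM ID value here are illustrative, not taken from the SPDK sources:

```shell
# Accumulate app flags in an array so each flag/value pair expands as its
# own word when the command is eventually run (values are illustrative).
NVMF_APP_SHM_ID=0
NVMF_APP=(./nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
echo "${NVMF_APP[@]}"
```

Word-safe expansion is the point of the array form: `"${NVMF_APP[@]}"` preserves each argument even if a value contains spaces, which a flat string would not.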
00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.510 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:20.511 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:20.511 Error setting digest 00:21:20.511 4072B75E7A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:20.511 4072B75E7A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.511 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.511 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.512 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.512 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
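The `check_openssl_version` trace earlier (the `cmp_versions`/`decimal` calls in `scripts/common.sh`, comparing 3.1.1 against the 3.0.0 target) walks the dot-separated fields numerically until one side wins. A minimal sketch of that comparison, assuming both versions are purely numeric dotted fields (the real helper also handles `-`/`:` separators):

```shell
# ge A B: succeed when version A >= version B, comparing dot-separated
# fields numerically, with missing fields treated as 0 (sketch of the
# cmp_versions loop traced above).
ge() {
  local IFS=.
  local -a ver1=($1) ver2=($2)
  local i a b
  for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
    a=${ver1[i]:-0}; b=${ver2[i]:-0}
    if ((a > b)); then return 0; fi
    if ((a < b)); then return 1; fi
  done
  return 0  # all fields equal
}
ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```

The first unequal field decides the result, matching the early `return 0` the trace shows at `scripts/common.sh@367` once `1 > 0` in the second field.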
00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:23.041 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:23.041 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:23.041 Found net devices under 0000:09:00.0: cvl_0_0 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
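The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above reduces the sysfs glob results to bare interface names before they are echoed as `cvl_0_0`/`cvl_0_1`. A self-contained sketch of that expansion, using a path copied from this run:

```shell
# ##*/ strips the longest prefix ending in '/', turning sysfs net-device
# paths into plain interface names.
pci_net_devs=("/sys/bus/pci/devices/0000:09:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"
```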
00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:23.041 Found net devices under 0000:09:00.1: cvl_0_1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.041 10:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:21:23.041 00:21:23.041 --- 10.0.0.2 ping statistics --- 00:21:23.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.041 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:23.041 00:21:23.041 --- 10.0.0.1 ping statistics --- 00:21:23.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.041 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.041 10:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1372696 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1372696 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1372696 ']' 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.041 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.042 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.042 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.042 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.042 [2024-11-19 10:49:10.434342] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
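`waitforlisten 1372696` above blocks until the freshly launched `nvmf_tgt` is serving RPCs on `/var/tmp/spdk.sock`. A hypothetical sketch of that polling idea — the real helper in `autotest_common.sh` also verifies the pid is alive and retries an actual RPC, so the function name, argument shape, and timing below are assumptions:

```shell
# Poll for a UNIX-domain socket to appear, giving up after N tries
# (sketch only; not the real waitforlisten implementation).
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while ((retries-- > 0)); do
    if [ -S "$sock" ]; then return 0; fi
    sleep 0.1
  done
  return 1
}
wait_for_sock /no/such.sock 2 || echo "timed out"
```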
00:21:23.042 [2024-11-19 10:49:10.434450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.042 [2024-11-19 10:49:10.506261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.042 [2024-11-19 10:49:10.563757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.042 [2024-11-19 10:49:10.563828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.042 [2024-11-19 10:49:10.563842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.042 [2024-11-19 10:49:10.563852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.042 [2024-11-19 10:49:10.563861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.042 [2024-11-19 10:49:10.564431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.w76 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.w76 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.w76 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.w76 00:21:23.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:23.557 [2024-11-19 10:49:10.965970] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.557 [2024-11-19 10:49:10.981963] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.557 [2024-11-19 10:49:10.982197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.557 malloc0 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1372771 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1372771 /var/tmp/bdevperf.sock 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1372771 ']' 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.557 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.557 [2024-11-19 10:49:11.118187] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
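Before configuring the target, `fips.sh@137-140` above materializes the TLS pre-shared key into a mode-0600 temp file. A minimal sketch of that step, with the key string and `mktemp` template copied from the log:

```shell
# Write the NVMe/TCP TLS PSK to a private temp file, as the fips.sh
# trace above does (key and template taken verbatim from the log).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$key" > "$key_path"   # -n/%s: no trailing newline in the key file
chmod 0600 "$key_path"
stat -c '%a' "$key_path"
```

The 0600 mode matters: the key is later handed to `keyring_file_add_key`, and keyring implementations commonly reject world- or group-readable key files.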
00:21:23.557 [2024-11-19 10:49:11.118277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372771 ] 00:21:23.815 [2024-11-19 10:49:11.185641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.815 [2024-11-19 10:49:11.245380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.815 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.815 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:23.815 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.w76 00:21:24.073 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:24.331 [2024-11-19 10:49:11.884921] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.589 TLSTESTn1 00:21:24.589 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.589 Running I/O for 10 seconds... 
00:21:26.894 3347.00 IOPS, 13.07 MiB/s [2024-11-19T09:49:15.449Z] 3434.00 IOPS, 13.41 MiB/s [2024-11-19T09:49:16.380Z] 3478.33 IOPS, 13.59 MiB/s [2024-11-19T09:49:17.312Z] 3486.75 IOPS, 13.62 MiB/s [2024-11-19T09:49:18.244Z] 3497.20 IOPS, 13.66 MiB/s [2024-11-19T09:49:19.176Z] 3499.83 IOPS, 13.67 MiB/s [2024-11-19T09:49:20.106Z] 3492.29 IOPS, 13.64 MiB/s [2024-11-19T09:49:21.478Z] 3495.38 IOPS, 13.65 MiB/s [2024-11-19T09:49:22.410Z] 3494.22 IOPS, 13.65 MiB/s [2024-11-19T09:49:22.410Z] 3498.30 IOPS, 13.67 MiB/s 00:21:34.787 Latency(us) 00:21:34.787 [2024-11-19T09:49:22.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.787 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.787 Verification LBA range: start 0x0 length 0x2000 00:21:34.787 TLSTESTn1 : 10.02 3505.02 13.69 0.00 0.00 36462.63 6043.88 39030.33 00:21:34.787 [2024-11-19T09:49:22.410Z] =================================================================================================================== 00:21:34.787 [2024-11-19T09:49:22.410Z] Total : 3505.02 13.69 0.00 0.00 36462.63 6043.88 39030.33 00:21:34.787 { 00:21:34.787 "results": [ 00:21:34.787 { 00:21:34.787 "job": "TLSTESTn1", 00:21:34.787 "core_mask": "0x4", 00:21:34.788 "workload": "verify", 00:21:34.788 "status": "finished", 00:21:34.788 "verify_range": { 00:21:34.788 "start": 0, 00:21:34.788 "length": 8192 00:21:34.788 }, 00:21:34.788 "queue_depth": 128, 00:21:34.788 "io_size": 4096, 00:21:34.788 "runtime": 10.017057, 00:21:34.788 "iops": 3505.0214848532855, 00:21:34.788 "mibps": 13.691490175208147, 00:21:34.788 "io_failed": 0, 00:21:34.788 "io_timeout": 0, 00:21:34.788 "avg_latency_us": 36462.6335662099, 00:21:34.788 "min_latency_us": 6043.875555555555, 00:21:34.788 "max_latency_us": 39030.328888888886 00:21:34.788 } 00:21:34.788 ], 00:21:34.788 "core_count": 1 00:21:34.788 } 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:34.788 
10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:34.788 nvmf_trace.0 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1372771 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1372771 ']' 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1372771 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1372771 00:21:34.788 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1372771' 00:21:34.788 killing process with pid 1372771 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1372771 00:21:34.788 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.788 00:21:34.788 Latency(us) 00:21:34.788 [2024-11-19T09:49:22.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.788 [2024-11-19T09:49:22.411Z] =================================================================================================================== 00:21:34.788 [2024-11-19T09:49:22.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.788 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1372771 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.046 rmmod nvme_tcp 00:21:35.046 rmmod nvme_fabrics 00:21:35.046 rmmod nvme_keyring 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1372696 ']' 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1372696 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1372696 ']' 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1372696 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1372696 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1372696' 00:21:35.046 killing process with pid 1372696 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1372696 00:21:35.046 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1372696 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.304 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.w76 00:21:37.842 00:21:37.842 real 0m17.086s 00:21:37.842 user 0m22.373s 00:21:37.842 sys 0m5.694s 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:37.842 ************************************ 00:21:37.842 END TEST nvmf_fips 00:21:37.842 ************************************ 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.842 ************************************ 00:21:37.842 START TEST nvmf_control_msg_list 00:21:37.842 ************************************ 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:37.842 * Looking for test storage... 00:21:37.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.842 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.842 10:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.842 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:37.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.842 --rc genhtml_branch_coverage=1 00:21:37.842 --rc genhtml_function_coverage=1 00:21:37.842 --rc genhtml_legend=1 00:21:37.842 --rc geninfo_all_blocks=1 00:21:37.842 --rc geninfo_unexecuted_blocks=1 00:21:37.842 00:21:37.843 ' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.843 --rc genhtml_branch_coverage=1 00:21:37.843 --rc genhtml_function_coverage=1 00:21:37.843 --rc genhtml_legend=1 00:21:37.843 --rc geninfo_all_blocks=1 00:21:37.843 --rc geninfo_unexecuted_blocks=1 00:21:37.843 00:21:37.843 ' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.843 --rc genhtml_branch_coverage=1 00:21:37.843 --rc genhtml_function_coverage=1 00:21:37.843 --rc genhtml_legend=1 00:21:37.843 --rc geninfo_all_blocks=1 00:21:37.843 --rc geninfo_unexecuted_blocks=1 00:21:37.843 00:21:37.843 ' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:21:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.843 --rc genhtml_branch_coverage=1 00:21:37.843 --rc genhtml_function_coverage=1 00:21:37.843 --rc genhtml_legend=1 00:21:37.843 --rc geninfo_all_blocks=1 00:21:37.843 --rc geninfo_unexecuted_blocks=1 00:21:37.843 00:21:37.843 ' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.843 10:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.843 10:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:37.843 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.844 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.748 10:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.748 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:39.749 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:39.749 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.749 10:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:39.749 Found net devices under 0000:09:00.0: cvl_0_0 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.749 10:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:39.749 Found net devices under 0000:09:00.1: cvl_0_1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.749 10:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:21:39.749 00:21:39.749 --- 10.0.0.2 ping statistics --- 00:21:39.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.749 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:21:39.749 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:39.750 00:21:39.750 --- 10.0.0.1 ping statistics --- 00:21:39.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.750 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1376050 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1376050 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1376050 ']' 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.750 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:39.750 [2024-11-19 10:49:27.261634] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:39.750 [2024-11-19 10:49:27.261723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.750 [2024-11-19 10:49:27.334271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.008 [2024-11-19 10:49:27.393989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.008 [2024-11-19 10:49:27.394033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.008 [2024-11-19 10:49:27.394061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.008 [2024-11-19 10:49:27.394073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.008 [2024-11-19 10:49:27.394089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.008 [2024-11-19 10:49:27.394763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.008 [2024-11-19 10:49:27.533848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:40.008 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.009 Malloc0 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.009 [2024-11-19 10:49:27.573991] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1376196 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1376197 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1376198 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1376196 00:21:40.009 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.266 [2024-11-19 10:49:27.632542] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:40.266 [2024-11-19 10:49:27.642876] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:40.266 [2024-11-19 10:49:27.643327] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:41.198 Initializing NVMe Controllers 00:21:41.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:41.198 Initialization complete. Launching workers. 00:21:41.198 ======================================================== 00:21:41.198 Latency(us) 00:21:41.198 Device Information : IOPS MiB/s Average min max 00:21:41.198 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4708.00 18.39 212.04 155.70 659.35 00:21:41.198 ======================================================== 00:21:41.198 Total : 4708.00 18.39 212.04 155.70 659.35 00:21:41.198 00:21:41.198 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1376197 00:21:41.455 Initializing NVMe Controllers 00:21:41.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:41.455 Initialization complete. Launching workers. 
00:21:41.455 ======================================================== 00:21:41.455 Latency(us) 00:21:41.455 Device Information : IOPS MiB/s Average min max 00:21:41.455 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4532.00 17.70 220.22 176.66 372.70 00:21:41.455 ======================================================== 00:21:41.455 Total : 4532.00 17.70 220.22 176.66 372.70 00:21:41.455 00:21:41.455 Initializing NVMe Controllers 00:21:41.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:41.455 Initialization complete. Launching workers. 00:21:41.456 ======================================================== 00:21:41.456 Latency(us) 00:21:41.456 Device Information : IOPS MiB/s Average min max 00:21:41.456 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40887.38 40583.36 41011.86 00:21:41.456 ======================================================== 00:21:41.456 Total : 25.00 0.10 40887.38 40583.36 41011.86 00:21:41.456 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1376198 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:41.456 10:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.456 rmmod nvme_tcp 00:21:41.456 rmmod nvme_fabrics 00:21:41.456 rmmod nvme_keyring 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1376050 ']' 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1376050 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1376050 ']' 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1376050 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.456 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1376050 00:21:41.456 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.456 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.456 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1376050' 00:21:41.456 killing process with pid 1376050 00:21:41.456 
10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1376050 00:21:41.456 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1376050 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.715 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.253 00:21:44.253 real 0m6.375s 00:21:44.253 user 0m5.729s 00:21:44.253 sys 0m2.733s 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.253 ************************************ 00:21:44.253 END TEST nvmf_control_msg_list 00:21:44.253 ************************************ 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.253 ************************************ 00:21:44.253 START TEST nvmf_wait_for_buf 00:21:44.253 ************************************ 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:44.253 * Looking for test storage... 
00:21:44.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:44.253 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:44.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.254 --rc genhtml_branch_coverage=1 00:21:44.254 --rc genhtml_function_coverage=1 00:21:44.254 --rc genhtml_legend=1 00:21:44.254 --rc geninfo_all_blocks=1 00:21:44.254 --rc geninfo_unexecuted_blocks=1 00:21:44.254 00:21:44.254 ' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.254 --rc genhtml_branch_coverage=1 00:21:44.254 --rc genhtml_function_coverage=1 00:21:44.254 --rc genhtml_legend=1 00:21:44.254 --rc geninfo_all_blocks=1 00:21:44.254 --rc geninfo_unexecuted_blocks=1 00:21:44.254 00:21:44.254 ' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.254 --rc genhtml_branch_coverage=1 00:21:44.254 --rc genhtml_function_coverage=1 00:21:44.254 --rc genhtml_legend=1 00:21:44.254 --rc geninfo_all_blocks=1 00:21:44.254 --rc geninfo_unexecuted_blocks=1 00:21:44.254 00:21:44.254 ' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.254 --rc genhtml_branch_coverage=1 00:21:44.254 --rc genhtml_function_coverage=1 00:21:44.254 --rc genhtml_legend=1 00:21:44.254 --rc geninfo_all_blocks=1 00:21:44.254 --rc geninfo_unexecuted_blocks=1 00:21:44.254 00:21:44.254 ' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.254 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.255 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.157 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:46.158 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:46.158 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:46.158 Found net devices under 0000:09:00.0: cvl_0_0 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.158 10:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:46.158 Found net devices under 0000:09:00.1: cvl_0_1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.158 10:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.158 10:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:21:46.158 00:21:46.158 --- 10.0.0.2 ping statistics --- 00:21:46.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.158 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:46.158 00:21:46.158 --- 10.0.0.1 ping statistics --- 00:21:46.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.158 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1378273 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:46.158 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1378273 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1378273 ']' 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.159 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.159 [2024-11-19 10:49:33.749749] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:46.159 [2024-11-19 10:49:33.749832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.417 [2024-11-19 10:49:33.819678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.417 [2024-11-19 10:49:33.872750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.417 [2024-11-19 10:49:33.872801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:46.417 [2024-11-19 10:49:33.872813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.417 [2024-11-19 10:49:33.872824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.417 [2024-11-19 10:49:33.872833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.417 [2024-11-19 10:49:33.873439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:46.417 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:46.418 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 
10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:46.418 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:46.418 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.676 Malloc0 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.676 [2024-11-19 10:49:34.110509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.676 [2024-11-19 10:49:34.134722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:46.676 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.676 [2024-11-19 10:49:34.213445] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:48.051 Initializing NVMe Controllers 00:21:48.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:48.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:48.051 Initialization complete. Launching workers. 00:21:48.051 ======================================================== 00:21:48.051 Latency(us) 00:21:48.051 Device Information : IOPS MiB/s Average min max 00:21:48.051 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 46.00 5.75 90638.09 31936.63 191531.93 00:21:48.051 ======================================================== 00:21:48.051 Total : 46.00 5.75 90638.09 31936.63 191531.93 00:21:48.051 00:21:48.051 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:48.051 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:48.051 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.051 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.051 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.052 10:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=710 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 710 -eq 0 ]] 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.052 rmmod nvme_tcp 00:21:48.052 rmmod nvme_fabrics 00:21:48.052 rmmod nvme_keyring 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1378273 ']' 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1378273 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1378273 ']' 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1378273 
00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.052 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1378273 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1378273' 00:21:48.316 killing process with pid 1378273 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1378273 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1378273 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.316 10:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.316 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.317 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.317 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.892 00:21:50.892 real 0m6.613s 00:21:50.892 user 0m3.045s 00:21:50.892 sys 0m2.018s 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.892 ************************************ 00:21:50.892 END TEST nvmf_wait_for_buf 00:21:50.892 ************************************ 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.892 10:49:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.797 
10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:52.797 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.797 10:49:40 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:52.797 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.797 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:52.798 Found net devices under 0000:09:00.0: cvl_0_0 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:52.798 Found net devices under 0000:09:00.1: cvl_0_1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.798 ************************************ 00:21:52.798 START TEST nvmf_perf_adq 00:21:52.798 ************************************ 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:52.798 * Looking for test storage... 00:21:52.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.798 --rc genhtml_branch_coverage=1 00:21:52.798 --rc genhtml_function_coverage=1 00:21:52.798 --rc genhtml_legend=1 00:21:52.798 --rc geninfo_all_blocks=1 00:21:52.798 --rc geninfo_unexecuted_blocks=1 00:21:52.798 00:21:52.798 ' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.798 --rc genhtml_branch_coverage=1 00:21:52.798 --rc genhtml_function_coverage=1 00:21:52.798 --rc genhtml_legend=1 00:21:52.798 --rc geninfo_all_blocks=1 00:21:52.798 --rc geninfo_unexecuted_blocks=1 00:21:52.798 00:21:52.798 ' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.798 --rc genhtml_branch_coverage=1 00:21:52.798 --rc genhtml_function_coverage=1 00:21:52.798 --rc genhtml_legend=1 00:21:52.798 --rc geninfo_all_blocks=1 00:21:52.798 --rc geninfo_unexecuted_blocks=1 00:21:52.798 00:21:52.798 ' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.798 --rc genhtml_branch_coverage=1 00:21:52.798 --rc genhtml_function_coverage=1 00:21:52.798 --rc genhtml_legend=1 00:21:52.798 --rc geninfo_all_blocks=1 00:21:52.798 --rc geninfo_unexecuted_blocks=1 00:21:52.798 00:21:52.798 ' 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.798 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.799 10:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.799 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.333 10:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:55.333 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:55.333 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:55.333 Found net devices under 0000:09:00.0: cvl_0_0 00:21:55.333 10:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:55.333 Found net devices under 0000:09:00.1: cvl_0_1 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:55.333 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:55.593 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:57.496 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:02.812 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:02.812 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:02.813 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:02.813 10:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:02.813 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:02.813 Found net devices under 0000:09:00.0: cvl_0_0 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:02.813 Found net devices under 0000:09:00.1: cvl_0_1 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.813 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:02.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:02.814 00:22:02.814 --- 10.0.0.2 ping statistics --- 00:22:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.814 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:22:02.814 00:22:02.814 --- 10.0.0.1 ping statistics --- 00:22:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.814 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1383003 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1383003 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1383003 ']' 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:02.814 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.814 [2024-11-19 10:49:50.260705] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:22:02.814 [2024-11-19 10:49:50.260789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.814 [2024-11-19 10:49:50.338158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.814 [2024-11-19 10:49:50.403670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.814 [2024-11-19 10:49:50.403717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.814 [2024-11-19 10:49:50.403731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.814 [2024-11-19 10:49:50.403743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.814 [2024-11-19 10:49:50.403752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.814 [2024-11-19 10:49:50.405381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.814 [2024-11-19 10:49:50.405446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.814 [2024-11-19 10:49:50.405508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.814 [2024-11-19 10:49:50.405512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:03.072 10:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.072 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.072 [2024-11-19 10:49:50.680811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.073 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.073 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.073 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.073 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.330 Malloc1 00:22:03.331 10:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.331 [2024-11-19 10:49:50.750872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1383155 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:03.331 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:05.232 "tick_rate": 2700000000, 00:22:05.232 "poll_groups": [ 00:22:05.232 { 00:22:05.232 "name": "nvmf_tgt_poll_group_000", 00:22:05.232 "admin_qpairs": 1, 00:22:05.232 "io_qpairs": 1, 00:22:05.232 "current_admin_qpairs": 1, 00:22:05.232 "current_io_qpairs": 1, 00:22:05.232 "pending_bdev_io": 0, 00:22:05.232 "completed_nvme_io": 19895, 00:22:05.232 "transports": [ 00:22:05.232 { 00:22:05.232 "trtype": "TCP" 00:22:05.232 } 00:22:05.232 ] 00:22:05.232 }, 00:22:05.232 { 00:22:05.232 "name": "nvmf_tgt_poll_group_001", 00:22:05.232 "admin_qpairs": 0, 00:22:05.232 "io_qpairs": 1, 00:22:05.232 "current_admin_qpairs": 0, 00:22:05.232 "current_io_qpairs": 1, 00:22:05.232 "pending_bdev_io": 0, 00:22:05.232 "completed_nvme_io": 19825, 00:22:05.232 "transports": [ 00:22:05.232 { 00:22:05.232 "trtype": "TCP" 00:22:05.232 } 00:22:05.232 ] 00:22:05.232 }, 00:22:05.232 { 00:22:05.232 "name": "nvmf_tgt_poll_group_002", 00:22:05.232 "admin_qpairs": 0, 00:22:05.232 "io_qpairs": 1, 00:22:05.232 "current_admin_qpairs": 0, 00:22:05.232 "current_io_qpairs": 1, 00:22:05.232 "pending_bdev_io": 0, 00:22:05.232 "completed_nvme_io": 
20292, 00:22:05.232 "transports": [ 00:22:05.232 { 00:22:05.232 "trtype": "TCP" 00:22:05.232 } 00:22:05.232 ] 00:22:05.232 }, 00:22:05.232 { 00:22:05.232 "name": "nvmf_tgt_poll_group_003", 00:22:05.232 "admin_qpairs": 0, 00:22:05.232 "io_qpairs": 1, 00:22:05.232 "current_admin_qpairs": 0, 00:22:05.232 "current_io_qpairs": 1, 00:22:05.232 "pending_bdev_io": 0, 00:22:05.232 "completed_nvme_io": 19564, 00:22:05.232 "transports": [ 00:22:05.232 { 00:22:05.232 "trtype": "TCP" 00:22:05.232 } 00:22:05.232 ] 00:22:05.232 } 00:22:05.232 ] 00:22:05.232 }' 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:05.232 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1383155 00:22:13.338 Initializing NVMe Controllers 00:22:13.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:13.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:13.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:13.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:13.338 Initialization complete. Launching workers. 
00:22:13.338 ======================================================== 00:22:13.338 Latency(us) 00:22:13.338 Device Information : IOPS MiB/s Average min max 00:22:13.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10264.50 40.10 6235.51 2312.14 10465.91 00:22:13.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10497.10 41.00 6096.86 2512.56 10310.77 00:22:13.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10770.60 42.07 5944.11 2268.44 10034.89 00:22:13.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10558.20 41.24 6062.06 2381.45 10299.46 00:22:13.338 ======================================================== 00:22:13.338 Total : 42090.40 164.42 6082.85 2268.44 10465.91 00:22:13.338 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.338 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.338 rmmod nvme_tcp 00:22:13.338 rmmod nvme_fabrics 00:22:13.338 rmmod nvme_keyring 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:13.596 10:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1383003 ']' 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1383003 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1383003 ']' 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1383003 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.596 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383003 00:22:13.596 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.596 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.596 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383003' 00:22:13.596 killing process with pid 1383003 00:22:13.596 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1383003 00:22:13.596 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1383003 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:13.855 
10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.855 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.761 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.761 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:15.761 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:15.762 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:16.329 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:18.230 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.504 10:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:23.504 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:23.504 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.504 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:23.505 Found net devices under 0000:09:00.0: cvl_0_0 00:22:23.505 10:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:23.505 Found net devices under 0000:09:00.1: cvl_0_1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:22:23.505 00:22:23.505 --- 10.0.0.2 ping statistics --- 00:22:23.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.505 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:22:23.505 00:22:23.505 --- 10.0.0.1 ping statistics --- 00:22:23.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.505 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:23.505 net.core.busy_poll = 1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:23.505 net.core.busy_read = 1 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:23.505 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1385765 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
1385765 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1385765 ']' 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.505 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.506 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.506 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.506 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.764 [2024-11-19 10:50:11.145048] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:23.764 [2024-11-19 10:50:11.145113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.764 [2024-11-19 10:50:11.213224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.764 [2024-11-19 10:50:11.268252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.764 [2024-11-19 10:50:11.268311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.764 [2024-11-19 10:50:11.268326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.764 [2024-11-19 10:50:11.268336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:23.764 [2024-11-19 10:50:11.268345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.764 [2024-11-19 10:50:11.269750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.764 [2024-11-19 10:50:11.269811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.764 [2024-11-19 10:50:11.269879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.764 [2024-11-19 10:50:11.269882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 [2024-11-19 10:50:11.569466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 Malloc1 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.022 [2024-11-19 10:50:11.638334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1385803 
00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:24.022 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:26.028 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:26.028 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.028 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.286 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.286 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:26.286 "tick_rate": 2700000000, 00:22:26.286 "poll_groups": [ 00:22:26.286 { 00:22:26.286 "name": "nvmf_tgt_poll_group_000", 00:22:26.286 "admin_qpairs": 1, 00:22:26.286 "io_qpairs": 3, 00:22:26.286 "current_admin_qpairs": 1, 00:22:26.286 "current_io_qpairs": 3, 00:22:26.286 "pending_bdev_io": 0, 00:22:26.286 "completed_nvme_io": 24730, 00:22:26.286 "transports": [ 00:22:26.286 { 00:22:26.286 "trtype": "TCP" 00:22:26.286 } 00:22:26.286 ] 00:22:26.286 }, 00:22:26.286 { 00:22:26.286 "name": "nvmf_tgt_poll_group_001", 00:22:26.286 "admin_qpairs": 0, 00:22:26.286 "io_qpairs": 1, 00:22:26.286 "current_admin_qpairs": 0, 00:22:26.286 "current_io_qpairs": 1, 00:22:26.286 "pending_bdev_io": 0, 00:22:26.286 "completed_nvme_io": 23567, 00:22:26.286 "transports": [ 00:22:26.286 { 00:22:26.286 "trtype": "TCP" 00:22:26.286 } 00:22:26.286 ] 00:22:26.286 }, 00:22:26.286 { 00:22:26.286 "name": "nvmf_tgt_poll_group_002", 00:22:26.286 "admin_qpairs": 0, 00:22:26.286 "io_qpairs": 0, 00:22:26.286 "current_admin_qpairs": 0, 
00:22:26.286 "current_io_qpairs": 0, 00:22:26.286 "pending_bdev_io": 0, 00:22:26.286 "completed_nvme_io": 0, 00:22:26.286 "transports": [ 00:22:26.286 { 00:22:26.286 "trtype": "TCP" 00:22:26.286 } 00:22:26.286 ] 00:22:26.287 }, 00:22:26.287 { 00:22:26.287 "name": "nvmf_tgt_poll_group_003", 00:22:26.287 "admin_qpairs": 0, 00:22:26.287 "io_qpairs": 0, 00:22:26.287 "current_admin_qpairs": 0, 00:22:26.287 "current_io_qpairs": 0, 00:22:26.287 "pending_bdev_io": 0, 00:22:26.287 "completed_nvme_io": 0, 00:22:26.287 "transports": [ 00:22:26.287 { 00:22:26.287 "trtype": "TCP" 00:22:26.287 } 00:22:26.287 ] 00:22:26.287 } 00:22:26.287 ] 00:22:26.287 }' 00:22:26.287 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:26.287 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:26.287 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:26.287 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:26.287 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1385803 00:22:34.397 Initializing NVMe Controllers 00:22:34.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:34.397 Initialization complete. Launching workers. 
00:22:34.397 ======================================================== 00:22:34.397 Latency(us) 00:22:34.397 Device Information : IOPS MiB/s Average min max 00:22:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4436.80 17.33 14427.33 1780.44 64169.86 00:22:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4264.90 16.66 15059.59 2166.90 63085.14 00:22:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4869.10 19.02 13203.92 2114.54 59933.57 00:22:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13056.00 51.00 4902.51 1726.45 7935.11 00:22:34.397 ======================================================== 00:22:34.397 Total : 26626.79 104.01 9634.55 1726.45 64169.86 00:22:34.397 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.397 rmmod nvme_tcp 00:22:34.397 rmmod nvme_fabrics 00:22:34.397 rmmod nvme_keyring 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:34.397 10:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1385765 ']' 00:22:34.397 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1385765 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1385765 ']' 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1385765 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1385765 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1385765' 00:22:34.398 killing process with pid 1385765 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1385765 00:22:34.398 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1385765 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:34.656 
10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.656 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:37.192 00:22:37.192 real 0m44.087s 00:22:37.192 user 2m41.791s 00:22:37.192 sys 0m8.955s 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.192 ************************************ 00:22:37.192 END TEST nvmf_perf_adq 00:22:37.192 ************************************ 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.192 ************************************ 00:22:37.192 START TEST nvmf_shutdown 00:22:37.192 ************************************ 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:37.192 * Looking for test storage... 00:22:37.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.192 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.193 10:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.193 --rc genhtml_branch_coverage=1 00:22:37.193 --rc genhtml_function_coverage=1 00:22:37.193 --rc genhtml_legend=1 00:22:37.193 --rc geninfo_all_blocks=1 00:22:37.193 --rc geninfo_unexecuted_blocks=1 00:22:37.193 00:22:37.193 ' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.193 --rc genhtml_branch_coverage=1 00:22:37.193 --rc genhtml_function_coverage=1 00:22:37.193 --rc genhtml_legend=1 00:22:37.193 --rc geninfo_all_blocks=1 00:22:37.193 --rc geninfo_unexecuted_blocks=1 00:22:37.193 00:22:37.193 ' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.193 --rc genhtml_branch_coverage=1 00:22:37.193 --rc genhtml_function_coverage=1 00:22:37.193 --rc genhtml_legend=1 00:22:37.193 --rc geninfo_all_blocks=1 00:22:37.193 --rc geninfo_unexecuted_blocks=1 00:22:37.193 00:22:37.193 ' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.193 --rc genhtml_branch_coverage=1 00:22:37.193 --rc genhtml_function_coverage=1 00:22:37.193 --rc genhtml_legend=1 00:22:37.193 --rc geninfo_all_blocks=1 00:22:37.193 --rc geninfo_unexecuted_blocks=1 00:22:37.193 00:22:37.193 ' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.193 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.194 ************************************ 00:22:37.194 START TEST nvmf_shutdown_tc1 00:22:37.194 ************************************ 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.194 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:39.727 10:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.727 10:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:39.727 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.727 10:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:39.727 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:39.727 Found net devices under 0000:09:00.0: cvl_0_0 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:39.727 Found net devices under 0000:09:00.1: cvl_0_1 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.727 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.728 10:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:22:39.728 00:22:39.728 --- 10.0.0.2 ping statistics --- 00:22:39.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.728 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:39.728 00:22:39.728 --- 10.0.0.1 ping statistics --- 00:22:39.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.728 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1389088 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1389088 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1389088 ']' 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:39.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.728 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.728 [2024-11-19 10:50:26.995797] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:39.728 [2024-11-19 10:50:26.995894] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.728 [2024-11-19 10:50:27.069491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.728 [2024-11-19 10:50:27.127560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.728 [2024-11-19 10:50:27.127622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.728 [2024-11-19 10:50:27.127636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.728 [2024-11-19 10:50:27.127657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.728 [2024-11-19 10:50:27.127667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
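The `ipts` call traced earlier (nvmf/common.sh@287) expands at nvmf/common.sh@790 into a plain `iptables` invocation with an `SPDK_NVMF:` comment appended, so teardown can later find and delete exactly the rules the test added. A minimal reconstruction of that helper, assuming it behaves as the traced expansion suggests:

```shell
# Hypothetical reconstruction of the ipts helper seen in the trace:
# forward all arguments to iptables, then append a comment match that
# tags the rule with "SPDK_NVMF:" plus the original argument list.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

With this shape, a cleanup pass only has to list rules and drop those whose comment starts with `SPDK_NVMF:`, leaving unrelated firewall rules untouched.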
00:22:39.728 [2024-11-19 10:50:27.129204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.728 [2024-11-19 10:50:27.129362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.728 [2024-11-19 10:50:27.129434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.728 [2024-11-19 10:50:27.133334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.728 [2024-11-19 10:50:27.283582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.728 10:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.728 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.729 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.987 Malloc1 00:22:39.987 [2024-11-19 10:50:27.381958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.987 Malloc2 00:22:39.987 Malloc3 00:22:39.987 Malloc4 00:22:39.987 Malloc5 00:22:39.987 Malloc6 00:22:40.244 Malloc7 00:22:40.244 Malloc8 00:22:40.244 Malloc9 
00:22:40.244 Malloc10 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1389155 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1389155 /var/tmp/bdevperf.sock 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1389155 ']' 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:40.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.244 { 00:22:40.244 "params": { 00:22:40.244 "name": "Nvme$subsystem", 00:22:40.244 "trtype": "$TEST_TRANSPORT", 00:22:40.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.244 "adrfam": "ipv4", 00:22:40.244 "trsvcid": "$NVMF_PORT", 00:22:40.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.244 "hdgst": ${hdgst:-false}, 00:22:40.244 "ddgst": ${ddgst:-false} 00:22:40.244 }, 00:22:40.244 "method": "bdev_nvme_attach_controller" 00:22:40.244 } 00:22:40.244 EOF 00:22:40.244 )") 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.244 { 00:22:40.244 "params": { 00:22:40.244 "name": "Nvme$subsystem", 00:22:40.244 "trtype": "$TEST_TRANSPORT", 00:22:40.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.244 "adrfam": "ipv4", 00:22:40.244 "trsvcid": "$NVMF_PORT", 00:22:40.244 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.244 "hdgst": ${hdgst:-false}, 00:22:40.244 "ddgst": ${ddgst:-false} 00:22:40.244 }, 00:22:40.244 "method": "bdev_nvme_attach_controller" 00:22:40.244 } 00:22:40.244 EOF 00:22:40.244 )") 00:22:40.244 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.503 { 00:22:40.503 "params": { 00:22:40.503 "name": "Nvme$subsystem", 00:22:40.503 "trtype": "$TEST_TRANSPORT", 00:22:40.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.503 "adrfam": "ipv4", 00:22:40.503 "trsvcid": "$NVMF_PORT", 00:22:40.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.503 "hdgst": ${hdgst:-false}, 00:22:40.503 "ddgst": ${ddgst:-false} 00:22:40.503 }, 00:22:40.503 "method": "bdev_nvme_attach_controller" 00:22:40.503 } 00:22:40.503 EOF 00:22:40.503 )") 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.503 { 00:22:40.503 "params": { 00:22:40.503 "name": "Nvme$subsystem", 00:22:40.503 "trtype": "$TEST_TRANSPORT", 00:22:40.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.503 "adrfam": "ipv4", 00:22:40.503 "trsvcid": "$NVMF_PORT", 00:22:40.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.503 "hdgst": 
${hdgst:-false}, 00:22:40.503 "ddgst": ${ddgst:-false} 00:22:40.503 }, 00:22:40.503 "method": "bdev_nvme_attach_controller" 00:22:40.503 } 00:22:40.503 EOF 00:22:40.503 )") 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.503 { 00:22:40.503 "params": { 00:22:40.503 "name": "Nvme$subsystem", 00:22:40.503 "trtype": "$TEST_TRANSPORT", 00:22:40.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.503 "adrfam": "ipv4", 00:22:40.503 "trsvcid": "$NVMF_PORT", 00:22:40.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.503 "hdgst": ${hdgst:-false}, 00:22:40.503 "ddgst": ${ddgst:-false} 00:22:40.503 }, 00:22:40.503 "method": "bdev_nvme_attach_controller" 00:22:40.503 } 00:22:40.503 EOF 00:22:40.503 )") 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.503 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.503 { 00:22:40.503 "params": { 00:22:40.503 "name": "Nvme$subsystem", 00:22:40.503 "trtype": "$TEST_TRANSPORT", 00:22:40.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.503 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "$NVMF_PORT", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.504 "hdgst": ${hdgst:-false}, 00:22:40.504 "ddgst": ${ddgst:-false} 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 
00:22:40.504 } 00:22:40.504 EOF 00:22:40.504 )") 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.504 { 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme$subsystem", 00:22:40.504 "trtype": "$TEST_TRANSPORT", 00:22:40.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "$NVMF_PORT", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.504 "hdgst": ${hdgst:-false}, 00:22:40.504 "ddgst": ${ddgst:-false} 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 } 00:22:40.504 EOF 00:22:40.504 )") 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.504 { 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme$subsystem", 00:22:40.504 "trtype": "$TEST_TRANSPORT", 00:22:40.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "$NVMF_PORT", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.504 "hdgst": ${hdgst:-false}, 00:22:40.504 "ddgst": ${ddgst:-false} 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 } 00:22:40.504 EOF 00:22:40.504 )") 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.504 { 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme$subsystem", 00:22:40.504 "trtype": "$TEST_TRANSPORT", 00:22:40.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "$NVMF_PORT", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.504 "hdgst": ${hdgst:-false}, 00:22:40.504 "ddgst": ${ddgst:-false} 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 } 00:22:40.504 EOF 00:22:40.504 )") 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.504 { 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme$subsystem", 00:22:40.504 "trtype": "$TEST_TRANSPORT", 00:22:40.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "$NVMF_PORT", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.504 "hdgst": ${hdgst:-false}, 00:22:40.504 "ddgst": ${ddgst:-false} 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 } 00:22:40.504 EOF 00:22:40.504 )") 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:40.504 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme1", 00:22:40.504 "trtype": "tcp", 00:22:40.504 "traddr": "10.0.0.2", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "4420", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.504 "hdgst": false, 00:22:40.504 "ddgst": false 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 },{ 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme2", 00:22:40.504 "trtype": "tcp", 00:22:40.504 "traddr": "10.0.0.2", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "4420", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.504 "hdgst": false, 00:22:40.504 "ddgst": false 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 },{ 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme3", 00:22:40.504 "trtype": "tcp", 00:22:40.504 "traddr": "10.0.0.2", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "4420", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.504 "hdgst": false, 00:22:40.504 "ddgst": false 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 },{ 00:22:40.504 "params": { 00:22:40.504 "name": "Nvme4", 00:22:40.504 "trtype": "tcp", 00:22:40.504 "traddr": "10.0.0.2", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "4420", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.504 "hdgst": false, 00:22:40.504 "ddgst": false 00:22:40.504 }, 00:22:40.504 "method": "bdev_nvme_attach_controller" 00:22:40.504 },{ 
00:22:40.504 "params": { 00:22:40.504 "name": "Nvme5", 00:22:40.504 "trtype": "tcp", 00:22:40.504 "traddr": "10.0.0.2", 00:22:40.504 "adrfam": "ipv4", 00:22:40.504 "trsvcid": "4420", 00:22:40.504 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.504 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.504 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 },{ 00:22:40.505 "params": { 00:22:40.505 "name": "Nvme6", 00:22:40.505 "trtype": "tcp", 00:22:40.505 "traddr": "10.0.0.2", 00:22:40.505 "adrfam": "ipv4", 00:22:40.505 "trsvcid": "4420", 00:22:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.505 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.505 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 },{ 00:22:40.505 "params": { 00:22:40.505 "name": "Nvme7", 00:22:40.505 "trtype": "tcp", 00:22:40.505 "traddr": "10.0.0.2", 00:22:40.505 "adrfam": "ipv4", 00:22:40.505 "trsvcid": "4420", 00:22:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.505 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.505 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 },{ 00:22:40.505 "params": { 00:22:40.505 "name": "Nvme8", 00:22:40.505 "trtype": "tcp", 00:22:40.505 "traddr": "10.0.0.2", 00:22:40.505 "adrfam": "ipv4", 00:22:40.505 "trsvcid": "4420", 00:22:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.505 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:40.505 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 },{ 00:22:40.505 "params": { 00:22:40.505 "name": "Nvme9", 00:22:40.505 "trtype": "tcp", 00:22:40.505 "traddr": "10.0.0.2", 00:22:40.505 "adrfam": "ipv4", 00:22:40.505 "trsvcid": "4420", 00:22:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.505 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:40.505 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 },{ 00:22:40.505 "params": { 00:22:40.505 "name": "Nvme10", 00:22:40.505 "trtype": "tcp", 00:22:40.505 "traddr": "10.0.0.2", 00:22:40.505 "adrfam": "ipv4", 00:22:40.505 "trsvcid": "4420", 00:22:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.505 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.505 "hdgst": false, 00:22:40.505 "ddgst": false 00:22:40.505 }, 00:22:40.505 "method": "bdev_nvme_attach_controller" 00:22:40.505 }' 00:22:40.505 [2024-11-19 10:50:27.906255] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:40.505 [2024-11-19 10:50:27.906356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:40.505 [2024-11-19 10:50:27.979233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.505 [2024-11-19 10:50:28.040604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1389155 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:42.406 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:43.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1389155 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1389088 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.781 { 00:22:43.781 "params": { 00:22:43.781 "name": "Nvme$subsystem", 00:22:43.781 "trtype": "$TEST_TRANSPORT", 00:22:43.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.781 "adrfam": "ipv4", 00:22:43.781 "trsvcid": "$NVMF_PORT", 00:22:43.781 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.781 "hdgst": ${hdgst:-false}, 00:22:43.781 "ddgst": ${ddgst:-false} 00:22:43.781 }, 00:22:43.781 "method": "bdev_nvme_attach_controller" 00:22:43.781 } 00:22:43.781 EOF 00:22:43.781 )") 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.781 { 00:22:43.781 "params": { 00:22:43.781 "name": "Nvme$subsystem", 00:22:43.781 "trtype": "$TEST_TRANSPORT", 00:22:43.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.781 "adrfam": "ipv4", 00:22:43.781 "trsvcid": "$NVMF_PORT", 00:22:43.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.781 "hdgst": ${hdgst:-false}, 00:22:43.781 "ddgst": ${ddgst:-false} 00:22:43.781 }, 00:22:43.781 "method": "bdev_nvme_attach_controller" 00:22:43.781 } 00:22:43.781 EOF 00:22:43.781 )") 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.781 { 00:22:43.781 "params": { 00:22:43.781 "name": "Nvme$subsystem", 00:22:43.781 "trtype": "$TEST_TRANSPORT", 00:22:43.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.781 "adrfam": "ipv4", 00:22:43.781 "trsvcid": "$NVMF_PORT", 00:22:43.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.781 "hdgst": 
${hdgst:-false}, 00:22:43.781 "ddgst": ${ddgst:-false} 00:22:43.781 }, 00:22:43.781 "method": "bdev_nvme_attach_controller" 00:22:43.781 } 00:22:43.781 EOF 00:22:43.781 )") 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.781 { 00:22:43.781 "params": { 00:22:43.781 "name": "Nvme$subsystem", 00:22:43.781 "trtype": "$TEST_TRANSPORT", 00:22:43.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.781 "adrfam": "ipv4", 00:22:43.781 "trsvcid": "$NVMF_PORT", 00:22:43.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.781 "hdgst": ${hdgst:-false}, 00:22:43.781 "ddgst": ${ddgst:-false} 00:22:43.781 }, 00:22:43.781 "method": "bdev_nvme_attach_controller" 00:22:43.781 } 00:22:43.781 EOF 00:22:43.781 )") 00:22:43.781 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.781 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 
00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.782 { 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme$subsystem", 00:22:43.782 "trtype": "$TEST_TRANSPORT", 00:22:43.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "$NVMF_PORT", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.782 "hdgst": ${hdgst:-false}, 00:22:43.782 "ddgst": ${ddgst:-false} 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 } 00:22:43.782 EOF 00:22:43.782 )") 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:43.782 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme1", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme2", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 
00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme3", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme4", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme5", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.782 "name": "Nvme6", 00:22:43.782 "trtype": "tcp", 00:22:43.782 "traddr": "10.0.0.2", 00:22:43.782 "adrfam": "ipv4", 00:22:43.782 "trsvcid": "4420", 00:22:43.782 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.782 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.782 "hdgst": false, 00:22:43.782 "ddgst": false 00:22:43.782 }, 00:22:43.782 "method": "bdev_nvme_attach_controller" 00:22:43.782 },{ 00:22:43.782 "params": { 00:22:43.783 "name": "Nvme7", 00:22:43.783 "trtype": "tcp", 00:22:43.783 "traddr": "10.0.0.2", 00:22:43.783 "adrfam": "ipv4", 00:22:43.783 "trsvcid": "4420", 00:22:43.783 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.783 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.783 "hdgst": false, 00:22:43.783 "ddgst": false 00:22:43.783 }, 00:22:43.783 "method": "bdev_nvme_attach_controller" 00:22:43.783 },{ 00:22:43.783 "params": { 00:22:43.783 "name": "Nvme8", 00:22:43.783 "trtype": "tcp", 00:22:43.783 "traddr": "10.0.0.2", 00:22:43.783 "adrfam": "ipv4", 00:22:43.783 "trsvcid": "4420", 00:22:43.783 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.783 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.783 "hdgst": false, 00:22:43.783 "ddgst": false 00:22:43.783 }, 00:22:43.783 "method": "bdev_nvme_attach_controller" 00:22:43.783 },{ 00:22:43.783 "params": { 00:22:43.783 "name": "Nvme9", 00:22:43.783 "trtype": "tcp", 00:22:43.783 "traddr": "10.0.0.2", 00:22:43.783 "adrfam": "ipv4", 00:22:43.783 "trsvcid": "4420", 00:22:43.783 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.783 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.783 "hdgst": false, 00:22:43.783 "ddgst": false 00:22:43.783 }, 00:22:43.783 "method": "bdev_nvme_attach_controller" 00:22:43.783 },{ 00:22:43.783 "params": { 00:22:43.783 "name": "Nvme10", 00:22:43.783 "trtype": "tcp", 00:22:43.783 "traddr": "10.0.0.2", 00:22:43.783 "adrfam": "ipv4", 00:22:43.783 "trsvcid": "4420", 00:22:43.783 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.783 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.783 "hdgst": false, 00:22:43.783 "ddgst": false 00:22:43.783 }, 00:22:43.783 "method": "bdev_nvme_attach_controller" 00:22:43.783 }' 00:22:43.783 [2024-11-19 10:50:31.034652] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:22:43.783 [2024-11-19 10:50:31.034769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389577 ] 00:22:43.783 [2024-11-19 10:50:31.107923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.783 [2024-11-19 10:50:31.170593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.157 Running I/O for 1 seconds... 00:22:46.090 1864.00 IOPS, 116.50 MiB/s 00:22:46.090 Latency(us) 00:22:46.090 [2024-11-19T09:50:33.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.091 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme1n1 : 1.10 239.77 14.99 0.00 0.00 262498.34 6650.69 246997.90 00:22:46.091 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme2n1 : 1.09 234.37 14.65 0.00 0.00 265594.69 18544.26 256318.58 00:22:46.091 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme3n1 : 1.08 236.29 14.77 0.00 0.00 258920.87 22622.06 268746.15 00:22:46.091 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme4n1 : 1.10 231.91 14.49 0.00 0.00 259167.95 19418.07 260978.92 00:22:46.091 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme5n1 : 1.12 229.15 14.32 0.00 0.00 258279.35 35535.08 237677.23 00:22:46.091 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 
length 0x400 00:22:46.091 Nvme6n1 : 1.12 228.00 14.25 0.00 0.00 254945.66 19709.35 257872.02 00:22:46.091 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme7n1 : 1.12 232.26 14.52 0.00 0.00 244656.90 3737.98 254765.13 00:22:46.091 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme8n1 : 1.19 268.53 16.78 0.00 0.00 210704.23 8446.86 245444.46 00:22:46.091 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme9n1 : 1.20 267.77 16.74 0.00 0.00 207823.83 11116.85 262532.36 00:22:46.091 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:46.091 Verification LBA range: start 0x0 length 0x400 00:22:46.091 Nvme10n1 : 1.17 218.89 13.68 0.00 0.00 248411.02 20777.34 278066.82 00:22:46.091 [2024-11-19T09:50:33.714Z] =================================================================================================================== 00:22:46.091 [2024-11-19T09:50:33.714Z] Total : 2386.93 149.18 0.00 0.00 245348.57 3737.98 278066.82 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.349 rmmod nvme_tcp 00:22:46.349 rmmod nvme_fabrics 00:22:46.349 rmmod nvme_keyring 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1389088 ']' 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1389088 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1389088 ']' 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1389088 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1389088 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1389088' 00:22:46.349 killing process with pid 1389088 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1389088 00:22:46.349 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1389088 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.916 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.822 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.822 00:22:48.822 real 0m11.904s 00:22:48.822 user 0m33.789s 00:22:48.822 sys 0m3.398s 00:22:48.822 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.822 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.822 ************************************ 00:22:48.822 END TEST nvmf_shutdown_tc1 00:22:48.822 ************************************ 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.081 ************************************ 00:22:49.081 START TEST nvmf_shutdown_tc2 00:22:49.081 ************************************ 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:49.081 10:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.081 10:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.081 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:49.082 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:49.082 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:49.082 Found net devices under 0000:09:00.0: cvl_0_0 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.082 10:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:49.082 Found net devices under 0000:09:00.1: cvl_0_1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:22:49.082 00:22:49.082 --- 10.0.0.2 ping statistics --- 00:22:49.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.082 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:22:49.082 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:49.083 00:22:49.083 --- 10.0.0.1 ping statistics --- 00:22:49.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.083 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.083 
10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1390341 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1390341 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1390341 ']' 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.083 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.341 [2024-11-19 10:50:36.746603] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:22:49.341 [2024-11-19 10:50:36.746678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.341 [2024-11-19 10:50:36.820911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.341 [2024-11-19 10:50:36.881635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.341 [2024-11-19 10:50:36.881691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.341 [2024-11-19 10:50:36.881704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.341 [2024-11-19 10:50:36.881716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.341 [2024-11-19 10:50:36.881725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.341 [2024-11-19 10:50:36.883336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.341 [2024-11-19 10:50:36.883401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.341 [2024-11-19 10:50:36.883464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.341 [2024-11-19 10:50:36.883468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.600 [2024-11-19 10:50:37.043298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.600 10:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.600 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.600 Malloc1 00:22:49.600 [2024-11-19 10:50:37.146471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.600 Malloc2 00:22:49.859 Malloc3 00:22:49.859 Malloc4 00:22:49.859 Malloc5 00:22:49.859 Malloc6 00:22:49.859 Malloc7 00:22:49.859 Malloc8 00:22:50.118 Malloc9 
00:22:50.118 Malloc10 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1390522 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1390522 /var/tmp/bdevperf.sock 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1390522 ']' 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:50.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.118 { 00:22:50.118 "params": { 00:22:50.118 "name": "Nvme$subsystem", 00:22:50.118 "trtype": "$TEST_TRANSPORT", 00:22:50.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.118 "adrfam": "ipv4", 00:22:50.118 "trsvcid": "$NVMF_PORT", 00:22:50.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.118 "hdgst": ${hdgst:-false}, 00:22:50.118 "ddgst": ${ddgst:-false} 00:22:50.118 }, 00:22:50.118 "method": "bdev_nvme_attach_controller" 00:22:50.118 } 00:22:50.118 EOF 00:22:50.118 )") 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.118 { 00:22:50.118 "params": { 00:22:50.118 "name": "Nvme$subsystem", 00:22:50.118 "trtype": "$TEST_TRANSPORT", 00:22:50.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.118 "adrfam": "ipv4", 00:22:50.118 "trsvcid": "$NVMF_PORT", 00:22:50.118 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.118 "hdgst": ${hdgst:-false}, 00:22:50.118 "ddgst": ${ddgst:-false} 00:22:50.118 }, 00:22:50.118 "method": "bdev_nvme_attach_controller" 00:22:50.118 } 00:22:50.118 EOF 00:22:50.118 )") 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.118 { 00:22:50.118 "params": { 00:22:50.118 "name": "Nvme$subsystem", 00:22:50.118 "trtype": "$TEST_TRANSPORT", 00:22:50.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.118 "adrfam": "ipv4", 00:22:50.118 "trsvcid": "$NVMF_PORT", 00:22:50.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.118 "hdgst": ${hdgst:-false}, 00:22:50.118 "ddgst": ${ddgst:-false} 00:22:50.118 }, 00:22:50.118 "method": "bdev_nvme_attach_controller" 00:22:50.118 } 00:22:50.118 EOF 00:22:50.118 )") 00:22:50.118 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": 
${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 
00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.119 { 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme$subsystem", 00:22:50.119 "trtype": "$TEST_TRANSPORT", 00:22:50.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "$NVMF_PORT", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.119 "hdgst": ${hdgst:-false}, 00:22:50.119 "ddgst": ${ddgst:-false} 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 } 00:22:50.119 EOF 00:22:50.119 )") 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.119 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme1", 00:22:50.119 "trtype": "tcp", 00:22:50.119 "traddr": "10.0.0.2", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "4420", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.119 "hdgst": false, 00:22:50.119 "ddgst": false 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 },{ 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme2", 00:22:50.119 "trtype": "tcp", 00:22:50.119 "traddr": "10.0.0.2", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "4420", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.119 "hdgst": false, 00:22:50.119 "ddgst": false 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 },{ 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme3", 00:22:50.119 "trtype": "tcp", 00:22:50.119 "traddr": "10.0.0.2", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "4420", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.119 "hdgst": false, 00:22:50.119 "ddgst": false 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 },{ 00:22:50.119 "params": { 00:22:50.119 "name": "Nvme4", 00:22:50.119 "trtype": "tcp", 00:22:50.119 "traddr": "10.0.0.2", 00:22:50.119 "adrfam": "ipv4", 00:22:50.119 "trsvcid": "4420", 00:22:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.119 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.119 "hdgst": false, 00:22:50.119 "ddgst": false 00:22:50.119 }, 00:22:50.119 "method": "bdev_nvme_attach_controller" 00:22:50.119 },{ 
00:22:50.120 "params": { 00:22:50.120 "name": "Nvme5", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 },{ 00:22:50.120 "params": { 00:22:50.120 "name": "Nvme6", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 },{ 00:22:50.120 "params": { 00:22:50.120 "name": "Nvme7", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 },{ 00:22:50.120 "params": { 00:22:50.120 "name": "Nvme8", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.120 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 },{ 00:22:50.120 "params": { 00:22:50.120 "name": "Nvme9", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.120 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 },{ 00:22:50.120 "params": { 00:22:50.120 "name": "Nvme10", 00:22:50.120 "trtype": "tcp", 00:22:50.120 "traddr": "10.0.0.2", 00:22:50.120 "adrfam": "ipv4", 00:22:50.120 "trsvcid": "4420", 00:22:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.120 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.120 "hdgst": false, 00:22:50.120 "ddgst": false 00:22:50.120 }, 00:22:50.120 "method": "bdev_nvme_attach_controller" 00:22:50.120 }' 00:22:50.120 [2024-11-19 10:50:37.650863] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:50.120 [2024-11-19 10:50:37.650957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390522 ] 00:22:50.120 [2024-11-19 10:50:37.723137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.378 [2024-11-19 10:50:37.783484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.278 Running I/O for 10 seconds... 
00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:52.278 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.537 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.794 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.794 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:52.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:52.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1390522 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1390522 
']' 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1390522 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390522 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390522' 00:22:53.053 killing process with pid 1390522 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1390522 00:22:53.053 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1390522 00:22:53.053 Received shutdown signal, test time was about 0.973226 seconds 00:22:53.053 00:22:53.053 Latency(us) 00:22:53.053 [2024-11-19T09:50:40.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme1n1 : 0.95 201.48 12.59 0.00 0.00 314132.10 23690.05 281173.71 00:22:53.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme2n1 : 0.97 264.24 16.51 0.00 0.00 234755.41 19515.16 246997.90 
00:22:53.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme3n1 : 0.96 266.16 16.64 0.00 0.00 228431.64 24855.13 245444.46 00:22:53.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme4n1 : 0.96 265.58 16.60 0.00 0.00 224158.34 17670.45 251658.24 00:22:53.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme5n1 : 0.93 206.51 12.91 0.00 0.00 281628.32 19223.89 236123.78 00:22:53.053 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme6n1 : 0.94 204.40 12.78 0.00 0.00 279045.56 21748.24 259425.47 00:22:53.053 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme7n1 : 0.97 263.27 16.45 0.00 0.00 213096.11 17670.45 276513.37 00:22:53.053 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme8n1 : 0.93 216.67 13.54 0.00 0.00 248127.42 9417.77 250104.79 00:22:53.053 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme9n1 : 0.95 202.34 12.65 0.00 0.00 264612.98 21942.42 260978.92 00:22:53.053 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.053 Verification LBA range: start 0x0 length 0x400 00:22:53.053 Nvme10n1 : 0.96 200.79 12.55 0.00 0.00 261152.11 21942.42 287387.50 00:22:53.053 [2024-11-19T09:50:40.676Z] =================================================================================================================== 00:22:53.053 
[2024-11-19T09:50:40.676Z] Total : 2291.44 143.21 0.00 0.00 251392.68 9417.77 287387.50 00:22:53.312 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1390341 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.246 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.246 rmmod nvme_tcp 00:22:54.246 rmmod nvme_fabrics 00:22:54.504 rmmod nvme_keyring 00:22:54.504 10:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1390341 ']' 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1390341 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1390341 ']' 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1390341 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390341 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390341' 00:22:54.504 killing process with pid 1390341 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1390341 00:22:54.504 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 1390341 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.072 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.977 00:22:56.977 real 0m7.981s 00:22:56.977 user 0m24.938s 00:22:56.977 sys 0m1.503s 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.977 ************************************ 00:22:56.977 END TEST nvmf_shutdown_tc2 00:22:56.977 ************************************ 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.977 ************************************ 00:22:56.977 START TEST nvmf_shutdown_tc3 00:22:56.977 ************************************ 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.977 10:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.977 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:56.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.978 10:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:56.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:56.978 Found net devices under 0000:09:00.0: cvl_0_0 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:56.978 Found net devices under 0000:09:00.1: cvl_0_1 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.978 10:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.978 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:22:57.237 00:22:57.237 --- 10.0.0.2 ping statistics --- 00:22:57.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.237 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:57.237 00:22:57.237 --- 10.0.0.1 ping statistics --- 00:22:57.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.237 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.237 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1391441 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1391441 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1391441 ']' 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.238 10:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.238 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.238 [2024-11-19 10:50:44.770859] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:57.238 [2024-11-19 10:50:44.770927] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.238 [2024-11-19 10:50:44.840738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.496 [2024-11-19 10:50:44.897035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.496 [2024-11-19 10:50:44.897086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.496 [2024-11-19 10:50:44.897108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.496 [2024-11-19 10:50:44.897118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.496 [2024-11-19 10:50:44.897127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.496 [2024-11-19 10:50:44.898610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.496 [2024-11-19 10:50:44.898667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.497 [2024-11-19 10:50:44.898752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.497 [2024-11-19 10:50:44.898755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.497 [2024-11-19 10:50:45.045767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.497 10:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.497 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 Malloc1 00:22:57.755 [2024-11-19 10:50:45.150113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.755 Malloc2 00:22:57.755 Malloc3 00:22:57.755 Malloc4 00:22:57.755 Malloc5 00:22:57.755 Malloc6 00:22:58.013 Malloc7 00:22:58.013 Malloc8 00:22:58.013 Malloc9 
00:22:58.013 Malloc10 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1391619 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1391619 /var/tmp/bdevperf.sock 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1391619 ']' 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.013 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:58.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.014 { 00:22:58.014 "params": { 00:22:58.014 "name": "Nvme$subsystem", 00:22:58.014 "trtype": "$TEST_TRANSPORT", 00:22:58.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.014 "adrfam": "ipv4", 00:22:58.014 "trsvcid": "$NVMF_PORT", 00:22:58.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.014 "hdgst": ${hdgst:-false}, 00:22:58.014 "ddgst": ${ddgst:-false} 00:22:58.014 }, 00:22:58.014 "method": "bdev_nvme_attach_controller" 00:22:58.014 } 00:22:58.014 EOF 00:22:58.014 )") 00:22:58.014 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.272 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.272 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.272 { 00:22:58.272 "params": { 00:22:58.272 "name": "Nvme$subsystem", 00:22:58.272 "trtype": "$TEST_TRANSPORT", 00:22:58.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.272 "adrfam": "ipv4", 00:22:58.272 "trsvcid": "$NVMF_PORT", 00:22:58.272 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.272 "hdgst": ${hdgst:-false}, 00:22:58.272 "ddgst": ${ddgst:-false} 00:22:58.272 }, 00:22:58.272 "method": "bdev_nvme_attach_controller" 00:22:58.272 } 00:22:58.272 EOF 00:22:58.272 )") 00:22:58.272 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.272 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.272 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.272 { 00:22:58.272 "params": { 00:22:58.272 "name": "Nvme$subsystem", 00:22:58.272 "trtype": "$TEST_TRANSPORT", 00:22:58.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.272 "adrfam": "ipv4", 00:22:58.272 "trsvcid": "$NVMF_PORT", 00:22:58.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.272 "hdgst": ${hdgst:-false}, 00:22:58.272 "ddgst": ${ddgst:-false} 00:22:58.272 }, 00:22:58.272 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": 
${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 
00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.273 { 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme$subsystem", 00:22:58.273 "trtype": "$TEST_TRANSPORT", 00:22:58.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "$NVMF_PORT", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.273 "hdgst": ${hdgst:-false}, 00:22:58.273 "ddgst": ${ddgst:-false} 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 } 00:22:58.273 EOF 00:22:58.273 )") 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:58.273 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme1", 00:22:58.273 "trtype": "tcp", 00:22:58.273 "traddr": "10.0.0.2", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "4420", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.273 "hdgst": false, 00:22:58.273 "ddgst": false 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 },{ 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme2", 00:22:58.273 "trtype": "tcp", 00:22:58.273 "traddr": "10.0.0.2", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "4420", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.273 "hdgst": false, 00:22:58.273 "ddgst": false 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 },{ 00:22:58.273 "params": { 00:22:58.273 "name": "Nvme3", 00:22:58.273 "trtype": "tcp", 00:22:58.273 "traddr": "10.0.0.2", 00:22:58.273 "adrfam": "ipv4", 00:22:58.273 "trsvcid": "4420", 00:22:58.273 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:58.273 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:58.273 "hdgst": false, 00:22:58.273 "ddgst": false 00:22:58.273 }, 00:22:58.273 "method": "bdev_nvme_attach_controller" 00:22:58.273 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme4", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 
00:22:58.274 "params": { 00:22:58.274 "name": "Nvme5", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme6", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme7", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme8", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme9", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:58.274 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 },{ 00:22:58.274 "params": { 00:22:58.274 "name": "Nvme10", 00:22:58.274 "trtype": "tcp", 00:22:58.274 "traddr": "10.0.0.2", 00:22:58.274 "adrfam": "ipv4", 00:22:58.274 "trsvcid": "4420", 00:22:58.274 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:58.274 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:58.274 "hdgst": false, 00:22:58.274 "ddgst": false 00:22:58.274 }, 00:22:58.274 "method": "bdev_nvme_attach_controller" 00:22:58.274 }' 00:22:58.274 [2024-11-19 10:50:45.680322] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:22:58.274 [2024-11-19 10:50:45.680406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391619 ] 00:22:58.274 [2024-11-19 10:50:45.752782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.274 [2024-11-19 10:50:45.813470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.172 Running I/O for 10 seconds... 
00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.172 10:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=17 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 17 -ge 100 ']' 00:23:00.172 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:00.430 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:00.430 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.431 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.431 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.431 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.431 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.688 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:00.688 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=83 00:23:00.689 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 83 -ge 100 ']' 00:23:00.689 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:00.962 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:00.962 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:00.963 10:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1391441
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1391441 ']'
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1391441
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1391441
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1391441'
killing process with pid 1391441
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1391441
00:23:00.963 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1391441
00:23:00.963 [2024-11-19 10:50:48.400241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d8f00 is same with the state(6) to be set
00:23:00.963 [... identical message repeated for tqpair=0x16d8f00 from 10:50:48.400347 through 10:50:48.401114 ...]
00:23:00.964 [2024-11-19 10:50:48.402572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a4e0 is same with the state(6) to be set
00:23:00.964 [... identical message repeated for tqpair=0x146a4e0 from 10:50:48.402618 through 10:50:48.403392 ...]
00:23:00.965 [2024-11-19 10:50:48.404834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d93d0 is same with the state(6) to be set
00:23:00.965 [... identical message repeated for tqpair=0x16d93d0 from 10:50:48.404857 through 10:50:48.405673 ...]
00:23:00.966 [2024-11-19 10:50:48.407375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set
00:23:00.967 [... identical message repeated for tqpair=0x16d98a0 from 10:50:48.407408 through 10:50:48.408085 ...]
00:23:00.967 [2024-11-19 10:50:48.408098]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.408190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d98a0 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409541] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.967 [2024-11-19 10:50:48.409606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409687] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409838] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.409998] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410144] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.410273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9d90 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.411195] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.411221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.411234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.411245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.968 [2024-11-19 10:50:48.411257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411394] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411543] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411724] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411862] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.969 [2024-11-19 10:50:48.411886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.411999] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.412010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.412025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da260 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414814] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.970 [2024-11-19 10:50:48.414960] 
00:23:00.970 [2024-11-19 10:50:48.414963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.970 [2024-11-19 10:50:48.415010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.970 [2024-11-19 10:50:48.415041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.970 [2024-11-19 10:50:48.415059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.970 [2024-11-19 10:50:48.415079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.970 [2024-11-19 10:50:48.415094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.970 [2024-11-19 10:50:48.415110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.970 [2024-11-19 10:50:48.415128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.970 [2024-11-19 10:50:48.415146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.970 [2024-11-19 10:50:48.415162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.970 [2024-11-19 10:50:48.415179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.971 [2024-11-19 10:50:48.415211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.971 [2024-11-19 10:50:48.415260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.971 [2024-11-19 10:50:48.415290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.971 [2024-11-19 10:50:48.415359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1[2024-11-19 10:50:48.415393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with [2024-11-19 10:50:48.415407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:23:00.971 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.971 [2024-11-19 10:50:48.415462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 10:50:48.415537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.971 the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469c90 is same with the state(6) to be set 00:23:00.971 [2024-11-19 10:50:48.415553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 
10:50:48.415734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.971 [2024-11-19 10:50:48.415761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.971 [2024-11-19 10:50:48.415777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.415984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.972 [2024-11-19 10:50:48.416255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972 [2024-11-19 10:50:48.416357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972 [2024-11-19 10:50:48.416372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972 [2024-11-19 10:50:48.416379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972 [2024-11-19 10:50:48.416387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972
[2024-11-19 10:50:48.416405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972
[2024-11-19 10:50:48.416433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972
[2024-11-19 10:50:48.416446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972
[2024-11-19 10:50:48.416459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972
[2024-11-19 10:50:48.416473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.972
[2024-11-19 10:50:48.416484]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972 [2024-11-19 10:50:48.416488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.972 [2024-11-19 10:50:48.416497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.972 [2024-11-19 10:50:48.416504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973 [2024-11-19 10:50:48.416509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973 [2024-11-19 10:50:48.416521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973 [2024-11-19 10:50:48.416546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973 [2024-11-19 10:50:48.416559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 
[2024-11-19 10:50:48.416566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973 [2024-11-19 10:50:48.416571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973 [2024-11-19 10:50:48.416584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973 [2024-11-19 10:50:48.416629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973 [2024-11-19 10:50:48.416642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973 [2024-11-19 10:50:48.416659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973 [2024-11-19 10:50:48.416666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.973
[2024-11-19 10:50:48.416891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.973
[2024-11-19 10:50:48.416903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.973
[2024-11-19 10:50:48.416919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.416932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.416936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974
[2024-11-19 10:50:48.416944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.416951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.416956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.416967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974
[2024-11-19 10:50:48.416969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.416982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.416983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.416996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974
[2024-11-19 10:50:48.417015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.417032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974
[2024-11-19 10:50:48.417045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.417057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974
[2024-11-19 10:50:48.417069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974
[2024-11-19 10:50:48.417084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974
[2024-11-19 10:50:48.417098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974 [2024-11-19 10:50:48.417102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974 [2024-11-19 10:50:48.417111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974 [2024-11-19 10:50:48.417117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a010 is same with the state(6) to be set 00:23:00.974 [2024-11-19 10:50:48.417132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.974 [2024-11-19 10:50:48.417148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:00.974 [2024-11-19 10:50:48.417647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 
[2024-11-19 10:50:48.417733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f270 is same with the state(6) to be set 00:23:00.974 [2024-11-19 10:50:48.417854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.417963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.417976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb75e0 is same with the state(6) to be set 00:23:00.974 [2024-11-19 10:50:48.418028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.418049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.418072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.418093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.418109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.418123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.418137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.974 [2024-11-19 10:50:48.418151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.974 [2024-11-19 10:50:48.418164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb91710 is same with the state(6) to be set 00:23:00.975 [2024-11-19 
10:50:48.418216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f6f0 is same with the state(6) to be set 00:23:00.975 [2024-11-19 10:50:48.418433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73d1d0 is same with the state(6) to be set 00:23:00.975 [2024-11-19 10:50:48.418621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7110 is same with the state(6) to be set 00:23:00.975 [2024-11-19 10:50:48.418786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a350 is same with the state(6) to be set 00:23:00.975 [2024-11-19 10:50:48.418951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.418978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.418994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.419008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.419022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.419035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.419049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.419062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.419075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb69760 is same with the state(6) to be set 00:23:00.975 [2024-11-19 10:50:48.419123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.975 [2024-11-19 10:50:48.419143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.975 [2024-11-19 10:50:48.419158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736220 is same with the state(6) to be set 00:23:00.976 [2024-11-19 10:50:48.419286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.976 [2024-11-19 10:50:48.419421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.419434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6ddd0 is same with the state(6) to be set 00:23:00.976 [2024-11-19 10:50:48.429365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429539] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.429979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.429994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.430010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.430026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.430042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.430058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 [2024-11-19 10:50:48.430075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.430091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.976 
[2024-11-19 10:50:48.430107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.976 [2024-11-19 10:50:48.430122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 
10:50:48.430867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.977 [2024-11-19 10:50:48.430980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.977 [2024-11-19 10:50:48.430994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.978 [2024-11-19 10:50:48.431568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.978 [2024-11-19 10:50:48.431584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb418d0 is same with the state(6) to be set 00:23:00.978 [2024-11-19 10:50:48.439469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:00.978 [2024-11-19 10:50:48.439559] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a350 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73f270 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb75e0 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb91710 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73f6f0 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73d1d0 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a7110 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb69760 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736220 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.439903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6ddd0 (9): Bad file descriptor 00:23:00.978 task offset: 29184 on job bdev=Nvme7n1 fails 00:23:00.978 1814.00 IOPS, 113.38 MiB/s [2024-11-19T09:50:48.601Z] [2024-11-19 10:50:48.442582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:00.978 [2024-11-19 10:50:48.442760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.978 [2024-11-19 10:50:48.442792] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a350 with addr=10.0.0.2, port=4420 00:23:00.978 [2024-11-19 10:50:48.442812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a350 is same with the state(6) to be set 00:23:00.978 [2024-11-19 10:50:48.443487] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.978 [2024-11-19 10:50:48.443583] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.978 [2024-11-19 10:50:48.443697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.978 [2024-11-19 10:50:48.443727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a7110 with addr=10.0.0.2, port=4420 00:23:00.978 [2024-11-19 10:50:48.443759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7110 is same with the state(6) to be set 00:23:00.978 [2024-11-19 10:50:48.443782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a350 (9): Bad file descriptor 00:23:00.978 [2024-11-19 10:50:48.443926] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.978 [2024-11-19 10:50:48.443998] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.978 [2024-11-19 10:50:48.444068] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.978 [2024-11-19 10:50:48.444136] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.979 [2024-11-19 10:50:48.444255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.444971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.444987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.979 [2024-11-19 10:50:48.445167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.979 [2024-11-19 10:50:48.445183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 
10:50:48.445452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.445977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.445992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.980 [2024-11-19 10:50:48.446009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.446024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.446041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.446056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.446073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.446088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.446105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.980 [2024-11-19 10:50:48.446120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.980 [2024-11-19 10:50:48.446137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.446396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.446411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb45900 is same with the state(6) to be set 00:23:00.981 [2024-11-19 10:50:48.446547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a7110 (9): Bad file descriptor 00:23:00.981 [2024-11-19 10:50:48.446577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:00.981 [2024-11-19 10:50:48.446604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:00.981 [2024-11-19 10:50:48.446621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:00.981 [2024-11-19 10:50:48.446638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:00.981 [2024-11-19 10:50:48.447890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:00.981 [2024-11-19 10:50:48.447933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:00.981 [2024-11-19 10:50:48.447951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:00.981 [2024-11-19 10:50:48.447965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:00.981 [2024-11-19 10:50:48.447979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:23:00.981 [2024-11-19 10:50:48.448040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.981 [2024-11-19 10:50:48.448548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.981 [2024-11-19 10:50:48.448563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.982 [2024-11-19 10:50:48.448616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448792] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.448981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.448998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.982 [2024-11-19 10:50:48.449225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.982 [2024-11-19 10:50:48.449240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 
10:50:48.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 
[2024-11-19 10:50:48.449902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.449982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.449997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.450014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.450029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.450046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.450060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.450077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.450091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.450108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.983 [2024-11-19 10:50:48.450123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.983 [2024-11-19 10:50:48.450138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe4890 is same with the state(6) to be set 00:23:00.983 [2024-11-19 10:50:48.450398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.983 [2024-11-19 10:50:48.450427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb75e0 with addr=10.0.0.2, port=4420 00:23:00.984 [2024-11-19 10:50:48.450444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb75e0 is same with the state(6) to be set 00:23:00.984 [2024-11-19 10:50:48.450484] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:23:00.984 [2024-11-19 10:50:48.451995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.984 [2024-11-19 10:50:48.452038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb75e0 (9): Bad file descriptor 00:23:00.984 [2024-11-19 10:50:48.452144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452682] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.984 [2024-11-19 10:50:48.452919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.984 [2024-11-19 10:50:48.452935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.452950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.452970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.452986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 
10:50:48.453226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.985 [2024-11-19 10:50:48.453785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.985 [2024-11-19 10:50:48.453850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.985 [2024-11-19 10:50:48.453865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.453881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.453896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.453928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.462829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.462846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5b10 is same with the state(6) to be set 00:23:00.986 [2024-11-19 10:50:48.464227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.986 [2024-11-19 10:50:48.464792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.986 [2024-11-19 10:50:48.464808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.464840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.464874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.464906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.464937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.464967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.464982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 
10:50:48.465065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.987 [2024-11-19 10:50:48.465611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.987 [2024-11-19 10:50:48.465718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.987 [2024-11-19 10:50:48.465734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.465972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.465989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.466289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.466312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9437e0 is same with the state(6) to be set 00:23:00.988 [2024-11-19 10:50:48.467574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-11-19 10:50:48.467826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.988 [2024-11-19 10:50:48.467842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.467857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.467878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.467911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.467926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.467942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.467956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.467972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.467988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 
10:50:48.468128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.989 [2024-11-19 10:50:48.468686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-11-19 10:50:48.468702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.989 [2024-11-19 10:50:48.468719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468860] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.468969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.468984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 
10:50:48.469415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469597] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.990 [2024-11-19 10:50:48.469643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.990 [2024-11-19 10:50:48.469658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944b80 is same with the state(6) to be set 00:23:00.990 [2024-11-19 10:50:48.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.470924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.470946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.470962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.470979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.470994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.991 [2024-11-19 10:50:48.471376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.991 [2024-11-19 10:50:48.471391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 49 identical READ command / ABORTED completion pairs condensed: 00:23:00.991-00:23:00.992 [2024-11-19 10:50:48.471408-48.472947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15-63 nsid:1 lba:26496-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:00.992 [2024-11-19 10:50:48.472962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb404b0 is same with the state(6) to be set
[... 64 identical READ command / ABORTED completion pairs condensed: 00:23:00.993-00:23:00.995 [2024-11-19 10:50:48.474220-48.476272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:00.995 [2024-11-19 10:50:48.476287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb443d0 is same with the state(6) to be set
00:23:00.995 [2024-11-19 10:50:48.477558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.995 [2024-11-19 10:50:48.477582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.995 [2024-11-19 10:50:48.477606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.995 [2024-11-19 10:50:48.477622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.995 [2024-11-19 10:50:48.477639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.995 [2024-11-19 10:50:48.477654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.995 [2024-11-19 10:50:48.477671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.995 [2024-11-19 10:50:48.477691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.995 [2024-11-19 10:50:48.477708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:23:00.995 [2024-11-19 10:50:48.477723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.995 [2024-11-19 10:50:48.477740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.995 [2024-11-19 10:50:48.477756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.995 [2024-11-19 10:50:48.477772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.477983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.477999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478075] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 
10:50:48.478448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.996 [2024-11-19 10:50:48.478659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.996 [2024-11-19 10:50:48.478675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.478963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.478978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.997 [2024-11-19 10:50:48.478994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.997 [2024-11-19 10:50:48.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.997 [2024-11-19 10:50:48.479589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-19 10:50:48.479603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-19 10:50:48.479620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-19 10:50:48.479635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-19 10:50:48.479650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46e20 is same with the state(6) to be set 00:23:00.998 [2024-11-19 10:50:48.481313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:00.998 [2024-11-19 10:50:48.481346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:00.998 [2024-11-19 10:50:48.481365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:00.998 [2024-11-19 10:50:48.481595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.998 [2024-11-19 10:50:48.481631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73f6f0 with addr=10.0.0.2, port=4420 00:23:00.998 [2024-11-19 
10:50:48.481649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f6f0 is same with the state(6) to be set 00:23:00.998 [2024-11-19 10:50:48.481668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:00.998 [2024-11-19 10:50:48.481683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:00.998 [2024-11-19 10:50:48.481700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:00.998 [2024-11-19 10:50:48.481717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:00.998 [2024-11-19 10:50:48.481784] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:00.998 [2024-11-19 10:50:48.481810] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:00.998 [2024-11-19 10:50:48.481838] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:00.998 [2024-11-19 10:50:48.481859] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:23:00.998 [2024-11-19 10:50:48.481886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73f6f0 (9): Bad file descriptor
00:23:00.998 [2024-11-19 10:50:48.482279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:00.998 [2024-11-19 10:50:48.482316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:00.998
00:23:00.998 Latency(us)
00:23:00.998 [2024-11-19T09:50:48.621Z] Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:23:00.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme1n1 ended in about 1.05 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme1n1 : 1.05  183.08  11.44  61.03  0.00  259578.88  20971.52  259425.47
00:23:00.998 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme2n1 ended in about 1.06 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme2n1 : 1.06  184.72  11.55  60.32  0.00  254086.62  9709.04  257872.02
00:23:00.998 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme3n1 ended in about 1.06 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme3n1 : 1.06  185.07  11.57  60.12  0.00  249244.74  31263.10  237677.23
00:23:00.998 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme4n1 ended in about 1.07 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme4n1 : 1.07  187.30  11.71  59.94  0.00  242786.06  14369.37  237677.23
00:23:00.998 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme5n1 ended in about 1.07 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme5n1 : 1.07  179.26  11.20  59.75  0.00  246566.68  23884.23  256318.58
00:23:00.998 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme6n1 ended in about 1.04 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme6n1 : 1.04  184.87  11.55  61.62  0.00  233853.91  21554.06  259425.47
00:23:00.998 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme7n1 ended in about 1.03 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme7n1 : 1.03  186.13  11.63  62.04  0.00  227555.18  18641.35  256318.58
00:23:00.998 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme8n1 ended in about 1.07 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme8n1 : 1.07  178.70  11.17  59.57  0.00  233812.57  18447.17  239230.67
00:23:00.998 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme9n1 ended in about 1.04 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme9n1 : 1.04  127.28  7.96  61.25  0.00  288569.34  21554.06  276513.37
00:23:00.998 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.998 Job: Nvme10n1 ended in about 1.08 seconds with error
00:23:00.998 Verification LBA range: start 0x0 length 0x400
00:23:00.998 Nvme10n1 : 1.08  118.76  7.42  59.38  0.00  301197.08  21748.24  288940.94
00:23:00.998 [2024-11-19T09:50:48.621Z] ===================================================================================================================
00:23:00.998 [2024-11-19T09:50:48.621Z] Total : 1715.19  107.20  605.03  0.00  251605.11  9709.04  288940.94
00:23:00.998 [2024-11-19 10:50:48.509412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:00.998 [2024-11-19 10:50:48.509504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:00.998 [2024-11-19 10:50:48.509539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:00.998 [2024-11-19 10:50:48.509849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.998 [2024-11-19 10:50:48.509887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736220 with addr=10.0.0.2, port=4420
00:23:00.998 [2024-11-19 10:50:48.509909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736220 is same with the state(6) to be set
00:23:00.998 [2024-11-19 10:50:48.509998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.998 [2024-11-19 10:50:48.510025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73d1d0 with addr=10.0.0.2, port=4420
00:23:00.998 [2024-11-19 10:50:48.510042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73d1d0 is same with the state(6) to be set
00:23:00.998 [2024-11-19 10:50:48.510129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.998 [2024-11-19 10:50:48.510156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb69760 with addr=10.0.0.2, port=4420
00:23:00.999 [2024-11-19 10:50:48.510173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb69760 is same with the state(6) to be set
00:23:00.999 [2024-11-19 10:50:48.511887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:00.999 [2024-11-19 10:50:48.511919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:00.999 [2024-11-19 10:50:48.512088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.999 [2024-11-19 10:50:48.512116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6ddd0 with addr=10.0.0.2, port=4420
00:23:00.999 [2024-11-19 10:50:48.512133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6ddd0 is same with the state(6) to be set
00:23:00.999 [2024-11-19 10:50:48.512221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.999 [2024-11-19 10:50:48.512249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73f270 with addr=10.0.0.2, port=4420
00:23:00.999 [2024-11-19 10:50:48.512266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f270 is same with the state(6) to be set
00:23:00.999 [2024-11-19 10:50:48.512385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.999 [2024-11-19 10:50:48.512413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb91710 with addr=10.0.0.2, port=4420
00:23:00.999 [2024-11-19 10:50:48.512439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb91710 is same with the state(6) to be set
00:23:00.999 [2024-11-19 10:50:48.512529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.999 [2024-11-19 10:50:48.512557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a350 with addr=10.0.0.2, port=4420
00:23:00.999 [2024-11-19 10:50:48.512574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a350 is same with the state(6) to be set
00:23:00.999 [2024-11-19 10:50:48.512599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736220 (9): Bad file descriptor
00:23:00.999 [2024-11-19 10:50:48.512624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73d1d0 (9): Bad file descriptor
00:23:00.999 [2024-11-19 10:50:48.512643]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb69760 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.512662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.512677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.512694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:00.999 [2024-11-19 10:50:48.512711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:00.999 [2024-11-19 10:50:48.512774] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:00.999 [2024-11-19 10:50:48.512799] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:00.999 [2024-11-19 10:50:48.512822] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:23:00.999 [2024-11-19 10:50:48.513010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.999 [2024-11-19 10:50:48.513038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a7110 with addr=10.0.0.2, port=4420 00:23:00.999 [2024-11-19 10:50:48.513055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7110 is same with the state(6) to be set 00:23:00.999 [2024-11-19 10:50:48.513140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.999 [2024-11-19 10:50:48.513167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb75e0 with addr=10.0.0.2, port=4420 00:23:00.999 [2024-11-19 10:50:48.513196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb75e0 is same with the state(6) to be set 00:23:00.999 [2024-11-19 10:50:48.513215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6ddd0 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73f270 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb91710 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a350 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.513318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.513334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:23:00.999 [2024-11-19 10:50:48.513348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:00.999 [2024-11-19 10:50:48.513369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.513383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.513397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:00.999 [2024-11-19 10:50:48.513411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:00.999 [2024-11-19 10:50:48.513425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.513438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.513452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:00.999 [2024-11-19 10:50:48.513465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:00.999 [2024-11-19 10:50:48.513569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.999 [2024-11-19 10:50:48.513606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a7110 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb75e0 (9): Bad file descriptor 00:23:00.999 [2024-11-19 10:50:48.513647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.513661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.513675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:00.999 [2024-11-19 10:50:48.513689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:00.999 [2024-11-19 10:50:48.513704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:00.999 [2024-11-19 10:50:48.513718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:00.999 [2024-11-19 10:50:48.513731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:00.999 [2024-11-19 10:50:48.513744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:01.000 [2024-11-19 10:50:48.513759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:01.000 [2024-11-19 10:50:48.513772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:01.000 [2024-11-19 10:50:48.513785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:01.000 [2024-11-19 10:50:48.513798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:01.000 [2024-11-19 10:50:48.513814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:01.000 [2024-11-19 10:50:48.513827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:01.000 [2024-11-19 10:50:48.513841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:01.000 [2024-11-19 10:50:48.513855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:01.000 [2024-11-19 10:50:48.513977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.000 [2024-11-19 10:50:48.514004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73f6f0 with addr=10.0.0.2, port=4420 00:23:01.000 [2024-11-19 10:50:48.514025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73f6f0 is same with the state(6) to be set 00:23:01.000 [2024-11-19 10:50:48.514042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:01.000 [2024-11-19 10:50:48.514056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:01.000 [2024-11-19 10:50:48.514070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:01.000 [2024-11-19 10:50:48.514085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:01.000 [2024-11-19 10:50:48.514101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:01.000 [2024-11-19 10:50:48.514115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:01.000 [2024-11-19 10:50:48.514129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:01.000 [2024-11-19 10:50:48.514143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:01.000 [2024-11-19 10:50:48.514185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73f6f0 (9): Bad file descriptor 00:23:01.000 [2024-11-19 10:50:48.514230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:01.000 [2024-11-19 10:50:48.514249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:01.000 [2024-11-19 10:50:48.514263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:01.000 [2024-11-19 10:50:48.514278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:01.566 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:02.501 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1391619 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1391619 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1391619 
00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.502 10:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.502 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.502 rmmod nvme_tcp 00:23:02.502 rmmod nvme_fabrics 00:23:02.502 rmmod nvme_keyring 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1391441 ']' 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1391441 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1391441 ']' 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1391441 00:23:02.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1391441) - No such process 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1391441 is not found' 00:23:02.502 Process with pid 1391441 is not found 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.502 10:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.502 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.039 00:23:05.039 real 0m7.549s 00:23:05.039 user 0m18.729s 00:23:05.039 sys 0m1.495s 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.039 ************************************ 00:23:05.039 END TEST nvmf_shutdown_tc3 00:23:05.039 ************************************ 00:23:05.039 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.039 ************************************ 00:23:05.039 START TEST nvmf_shutdown_tc4 00:23:05.039 ************************************ 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:05.039 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.040 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:05.040 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:05.040 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.040 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:05.040 Found net devices under 0000:09:00.0: cvl_0_0 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.040 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.041 10:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:05.041 Found net devices under 0000:09:00.1: cvl_0_1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.041 
10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:23:05.041 00:23:05.041 --- 10.0.0.2 ping statistics --- 00:23:05.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.041 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:23:05.041 00:23:05.041 --- 10.0.0.1 ping statistics --- 00:23:05.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.041 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.041 10:50:52 
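The nvmf_tcp_init trace above moves the target NIC into its own network namespace so initiator and target traffic cross a real link, then verifies connectivity in both directions before returning 0. A dry-run sketch of the sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this log; executing the commands for real requires root and the actual NICs, so they are only collected and printed here):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
TARGET_NS=cvl_0_0_ns_spdk
CMDS=(
  "ip netns add $TARGET_NS"                                       # target namespace
  "ip link set cvl_0_0 netns $TARGET_NS"                          # move target NIC in
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                           # initiator address
  "ip netns exec $TARGET_NS ip addr add 10.0.0.2/24 dev cvl_0_0"  # target address
  "ip link set cvl_0_1 up"
  "ip netns exec $TARGET_NS ip link set cvl_0_0 up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # NVMe/TCP port
  "ping -c 1 10.0.0.2"                                            # connectivity check
)
printf '+ %s\n' "${CMDS[@]}"
```

The two ping runs in the log (host to 10.0.0.2 and namespace back to 10.0.0.1, both 0% loss) are what gate the `return 0` from this init step.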
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1392524 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1392524 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1392524 ']' 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.041 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.041 [2024-11-19 10:50:52.362392] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:05.041 [2024-11-19 10:50:52.362468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.041 [2024-11-19 10:50:52.436474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.041 [2024-11-19 10:50:52.497328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.041 [2024-11-19 10:50:52.497379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.041 [2024-11-19 10:50:52.497393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.041 [2024-11-19 10:50:52.497404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.041 [2024-11-19 10:50:52.497415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.041 [2024-11-19 10:50:52.498959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.042 [2024-11-19 10:50:52.499023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.042 [2024-11-19 10:50:52.499092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:05.042 [2024-11-19 10:50:52.499095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.042 [2024-11-19 10:50:52.644320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.042 10:50:52 
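The `waitforlisten 1392524` step above blocks until the freshly started nvmf_tgt is alive and answering on the UNIX domain socket /var/tmp/spdk.sock, and every later rpc_cmd depends on it. A simplified re-implementation of that pattern (the poll interval and retry count here are assumptions, not the values autotest_common.sh actually uses):

```shell
# Simplified waitforlisten: succeed once the pid is alive AND its RPC UNIX
# socket exists; fail if the process dies first or the retries run out.
waitforlisten_sketch() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=${3:-100}
  local i
  for ((i = 0; i < tries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    [ -S "$sock" ] && return 0               # socket is up: RPCs can proceed
    sleep 0.1
  done
  return 1                                   # timed out
}
```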
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.042 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.300 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.300 Malloc1 00:23:05.300 [2024-11-19 10:50:52.731481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.300 Malloc2 00:23:05.300 Malloc3 00:23:05.300 Malloc4 00:23:05.300 Malloc5 00:23:05.558 Malloc6 00:23:05.558 Malloc7 00:23:05.558 Malloc8 00:23:05.558 Malloc9 
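The `for i in "${num_subsystems[@]}"` / `cat` loop above appends one block of RPC commands per subsystem to rpcs.txt, which shutdown.sh@36 then replays in a single rpc_cmd batch, producing the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener seen here. A sketch of that batching pattern (the exact per-subsystem command set below is an assumption inferred from the bdev and listener output, not copied from shutdown.sh):

```shell
# Build one batched RPC file covering several subsystems, mirroring how the
# test accumulates rpcs.txt before a single rpc.py invocation.
RPCS=$(mktemp)
for i in 1 2 3; do
  cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
wc -l < "$RPCS"   # 4 commands per subsystem
```

Batching matters here: one rpc.py process replays the whole file over a single socket connection instead of paying a Python startup cost per command.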
00:23:05.558 Malloc10 00:23:05.558 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.558 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:05.558 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.558 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.815 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1392644 00:23:05.816 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:05.816 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:05.816 [2024-11-19 10:50:53.242559] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
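What follows in the log is the deliberate failure injection of tc4: spdk_nvme_perf is started with a 20 s run time, but after `sleep 5` the target (pid 1392524) is killed under load, so every in-flight write surfaces as a CQ transport error (-6). Before signalling, killprocess re-checks that the pid still maps to the expected process name (`ps --no-headers -o comm=` returning reactor_1). A minimal sketch of that guard (flow only; the real helper in autotest_common.sh handles more cases):

```shell
# Kill a pid only if it still maps to a live, non-sudo process, so a recycled
# pid is never signalled by mistake.
killprocess_sketch() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1     # already gone
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 for nvmf_tgt
  [ "$name" = sudo ] && return 1             # refuse to kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap if it is our child
}
```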
00:23:11.086 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1392524 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1392524 ']' 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1392524 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1392524 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1392524' 00:23:11.087 killing process with pid 1392524 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1392524 00:23:11.087 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1392524 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 
00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 [2024-11-19 10:50:58.229813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.087 starting I/O failed: -6 00:23:11.087 starting I/O failed: -6 
00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write 
completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 [2024-11-19 10:50:58.231038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.087 starting I/O failed: -6 00:23:11.087 starting I/O failed: -6 00:23:11.087 starting I/O failed: -6 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 
Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 starting I/O failed: -6 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.087 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, 
sc=8) 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 [2024-11-19 10:50:58.232431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 
00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: -6 00:23:11.088 Write completed with error (sct=0, sc=8) 00:23:11.088 starting I/O failed: 
-6
00:23:11.088 Write completed with error (sct=0, sc=8)
00:23:11.088 starting I/O failed: -6
00:23:11.088 [2024-11-19 10:50:58.234349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.088 NVMe io qpair process completion error
00:23:11.088 Write completed with error (sct=0, sc=8)
00:23:11.088 starting I/O failed: -6
00:23:11.089 [2024-11-19 10:50:58.235481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.089 Write completed with error (sct=0, sc=8)
00:23:11.089 starting I/O failed: -6
00:23:11.089 [2024-11-19 10:50:58.236529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.089 Write completed with error (sct=0, sc=8)
00:23:11.089 starting I/O failed: -6
00:23:11.089 [2024-11-19 10:50:58.237678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.089 Write completed with error (sct=0, sc=8)
00:23:11.089 starting I/O failed: -6
00:23:11.090 [2024-11-19 10:50:58.239906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.090 NVMe io qpair process completion error
00:23:11.090 [2024-11-19 10:50:58.240920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cd5f0 is same with the state(6) to be set
00:23:11.090 [2024-11-19 10:50:58.241573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c290 is same with the state(6) to be set
00:23:11.090 Write completed with error (sct=0, sc=8)
00:23:11.090 starting I/O failed: -6
00:23:11.090 [2024-11-19 10:50:58.242958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.090 Write completed with error (sct=0, sc=8)
00:23:11.090 starting I/O failed: -6
00:23:11.091 [2024-11-19 10:50:58.244127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.091 NVMe io qpair process completion error
00:23:11.091 Write completed with error (sct=0, sc=8)
00:23:11.091 starting I/O failed: -6
00:23:11.091 [2024-11-19 10:50:58.245420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.091 Write completed with error (sct=0, sc=8)
00:23:11.091 starting I/O failed: -6
00:23:11.091 [2024-11-19 10:50:58.246477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.091 Write completed with error (sct=0, sc=8)
00:23:11.091 starting I/O failed: -6
00:23:11.092 [2024-11-19 10:50:58.247613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.092 Write completed with error (sct=0, sc=8)
00:23:11.092 starting I/O failed: -6
00:23:11.093 [2024-11-19 10:50:58.249148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.093 NVMe io qpair process completion error
00:23:11.093 Write completed with error (sct=0, sc=8)
00:23:11.093 starting I/O failed: -6
00:23:11.093 Write completed with error
(sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 [2024-11-19 10:50:58.250584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write 
completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 [2024-11-19 10:50:58.251726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error 
(sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.093 starting I/O failed: -6 00:23:11.093 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting 
I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 [2024-11-19 10:50:58.253250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 
00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, 
sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error 
(sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 [2024-11-19 10:50:58.254984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.094 NVMe io qpair process completion error 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 starting I/O failed: -6 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.094 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 
00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed 
with error (sct=0, sc=8) 00:23:11.095 [2024-11-19 10:50:58.256405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O 
failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 [2024-11-19 10:50:58.257501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 
00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with 
error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.095 Write completed with error (sct=0, sc=8) 00:23:11.095 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 [2024-11-19 10:50:58.258650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 
Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 00:23:11.096 Write completed with error (sct=0, sc=8) 00:23:11.096 starting I/O failed: -6 
00:23:11.096 Write completed with error (sct=0, sc=8)
00:23:11.096 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated; duplicates elided]
00:23:11.096 [2024-11-19 10:50:58.261866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.096 NVMe io qpair process completion error
[identical entries repeated; duplicates elided]
00:23:11.097 [2024-11-19 10:50:58.263246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[identical entries repeated; duplicates elided]
00:23:11.097 [2024-11-19 10:50:58.264248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[identical entries repeated; duplicates elided]
00:23:11.097 [2024-11-19 10:50:58.265447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[identical entries repeated; duplicates elided]
00:23:11.098 [2024-11-19 10:50:58.268379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.098 NVMe io qpair process completion error
[identical entries repeated; duplicates elided]
00:23:11.098 [2024-11-19 10:50:58.269791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[identical entries repeated; duplicates elided]
00:23:11.099 [2024-11-19 10:50:58.270908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[identical entries repeated; duplicates elided]
00:23:11.099 [2024-11-19 10:50:58.272117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[identical entries repeated; duplicates elided]
00:23:11.100 [2024-11-19 10:50:58.274005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.100 NVMe io qpair process completion error
[identical entries repeated; duplicates elided]
00:23:11.100 [2024-11-19 10:50:58.275422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[identical entries repeated; duplicates elided]
00:23:11.100 Write completed with error (sct=0,
sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 [2024-11-19 10:50:58.276385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 starting I/O failed: -6 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.100 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, 
sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O 
failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 [2024-11-19 10:50:58.277584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting 
I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 
starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 
00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.101 starting I/O failed: -6 00:23:11.101 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 [2024-11-19 10:50:58.279519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.102 NVMe io qpair process completion error 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error 
(sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 [2024-11-19 10:50:58.280912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error 
(sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 
00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 [2024-11-19 10:50:58.282002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with 
error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 Write completed with error (sct=0, sc=8) 00:23:11.102 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 
starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 [2024-11-19 10:50:58.283145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write 
completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 
Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 
00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 starting I/O failed: -6 00:23:11.103 [2024-11-19 10:50:58.286793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.103 NVMe io qpair process completion error 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, 
sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.103 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, 
sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, sc=8) 00:23:11.104 Write completed with error (sct=0, 
sc=8)
00:23:11.104 Write completed with error (sct=0, sc=8)
00:23:11.104 Write completed with error (sct=0, sc=8)
00:23:11.104 Initializing NVMe Controllers
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:11.104 Controller IO queue size 128, less than required.
00:23:11.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:11.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:11.104 Initialization complete. Launching workers.
00:23:11.104 ========================================================
00:23:11.104 Latency(us)
00:23:11.104 Device Information : IOPS MiB/s Average min max
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1787.19 76.79 72168.67 722.24 138780.74
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1754.87 75.40 72962.63 984.60 128485.99
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1752.11 75.29 73102.54 877.06 126394.68
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1814.40 77.96 70618.52 1093.61 124406.06
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1822.70 78.32 70341.56 996.08 130476.67
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1824.40 78.39 70316.69 753.56 120865.67
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1805.69 77.59 71069.61 958.61 137439.29
00:23:11.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1814.40 77.96 70752.79 1012.73 140089.61
00:23:11.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1812.49 77.88 70010.24 769.57 121360.11
00:23:11.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1799.73 77.33 70528.42 1028.27 121412.76
00:23:11.105 ========================================================
00:23:11.105 Total : 17987.98 772.92 71173.02 722.24 140089.61
00:23:11.105
00:23:11.105 [2024-11-19 10:50:58.295696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf35f0 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.295807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf4900 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.295868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf2d10 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.295927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3c50 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.295992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf26b0 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.296050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf29e0 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.296107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3920 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.296164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf32c0 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.296221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf4720 is same with the state(6) to be set
00:23:11.105 [2024-11-19 10:50:58.296290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf4ae0 is same with the state(6) to be set
00:23:11.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:11.365 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1392644
00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1392644
00:23:12.305 10:50:59
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1392644 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.305 10:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.305 rmmod nvme_tcp 00:23:12.305 rmmod nvme_fabrics 00:23:12.305 rmmod nvme_keyring 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1392524 ']' 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1392524 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1392524 ']' 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1392524 00:23:12.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1392524) - No such process 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1392524 is not 
found' 00:23:12.305 Process with pid 1392524 is not found 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.305 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.843 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.843 00:23:14.843 real 0m9.728s 00:23:14.844 user 0m24.087s 00:23:14.844 sys 0m5.520s 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.844 ************************************ 00:23:14.844 END TEST nvmf_shutdown_tc4 00:23:14.844 ************************************ 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:14.844 00:23:14.844 real 0m37.550s 00:23:14.844 user 1m41.766s 00:23:14.844 sys 0m12.104s 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.844 ************************************ 00:23:14.844 END TEST nvmf_shutdown 00:23:14.844 ************************************ 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:14.844 ************************************ 00:23:14.844 START TEST nvmf_nsid 00:23:14.844 ************************************ 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.844 * Looking for test storage... 
00:23:14.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:14.844 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.844 
10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:14.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.844 --rc genhtml_branch_coverage=1 00:23:14.844 --rc genhtml_function_coverage=1 00:23:14.844 --rc genhtml_legend=1 00:23:14.844 --rc geninfo_all_blocks=1 00:23:14.844 --rc 
geninfo_unexecuted_blocks=1 00:23:14.844 00:23:14.844 ' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:14.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.844 --rc genhtml_branch_coverage=1 00:23:14.844 --rc genhtml_function_coverage=1 00:23:14.844 --rc genhtml_legend=1 00:23:14.844 --rc geninfo_all_blocks=1 00:23:14.844 --rc geninfo_unexecuted_blocks=1 00:23:14.844 00:23:14.844 ' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:14.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.844 --rc genhtml_branch_coverage=1 00:23:14.844 --rc genhtml_function_coverage=1 00:23:14.844 --rc genhtml_legend=1 00:23:14.844 --rc geninfo_all_blocks=1 00:23:14.844 --rc geninfo_unexecuted_blocks=1 00:23:14.844 00:23:14.844 ' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:14.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.844 --rc genhtml_branch_coverage=1 00:23:14.844 --rc genhtml_function_coverage=1 00:23:14.844 --rc genhtml_legend=1 00:23:14.844 --rc geninfo_all_blocks=1 00:23:14.844 --rc geninfo_unexecuted_blocks=1 00:23:14.844 00:23:14.844 ' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.844 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.845 10:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.845 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:16.751 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:16.751 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:16.751 Found net devices under 0000:09:00.0: cvl_0_0 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:16.751 Found net devices under 0000:09:00.1: cvl_0_1 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.751 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.752 10:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.752 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.015 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:17.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:23:17.015 00:23:17.015 --- 10.0.0.2 ping statistics --- 00:23:17.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.015 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:23:17.015 00:23:17.015 --- 10.0.0.1 ping statistics --- 00:23:17.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.015 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.015 10:51:04 
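The `nvmf_tcp_init` sequence traced above (namespace creation, moving the target NIC, addressing both ends, opening port 4420, ping verification) can be sketched as a standalone script. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from the log; the `run` wrapper defaults to echoing commands so the sketch can be inspected without root privileges or real NICs:

```shell
#!/usr/bin/env bash
# Sketch of the netns-based TCP test topology, assuming the names from the log.
# Replace run() with `sudo "$@"` to actually execute.
run() { echo "+ $*"; }

setup_tcp_ns() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                      # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"                  # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                     # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1                 # target -> initiator
}

setup_tcp_ns cvl_0_0 cvl_0_1
```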
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1395440 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1395440 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1395440 ']' 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.015 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.015 [2024-11-19 10:51:04.481672] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:23:17.015 [2024-11-19 10:51:04.481768] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.015 [2024-11-19 10:51:04.556648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.015 [2024-11-19 10:51:04.617881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.015 [2024-11-19 10:51:04.617933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.015 [2024-11-19 10:51:04.617956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.015 [2024-11-19 10:51:04.617968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.015 [2024-11-19 10:51:04.617978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.015 [2024-11-19 10:51:04.618601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1395579 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.313 
10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=90f996cd-ec7c-4806-bb6a-876ccd26a892 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d3b6506b-d345-4d5d-8746-4d58531dc0e2 00:23:17.313 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=19aaac31-6d4c-4be2-8783-872e62627f38 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.314 null0 00:23:17.314 null1 00:23:17.314 null2 00:23:17.314 [2024-11-19 10:51:04.791847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.314 [2024-11-19 10:51:04.806667] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:23:17.314 [2024-11-19 10:51:04.806756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395579 ] 00:23:17.314 [2024-11-19 10:51:04.816030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1395579 /var/tmp/tgt2.sock 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1395579 ']' 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:17.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.314 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.314 [2024-11-19 10:51:04.873966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.619 [2024-11-19 10:51:04.935668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.619 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.619 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:17.619 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:18.185 [2024-11-19 10:51:05.589499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.185 [2024-11-19 10:51:05.605702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:18.185 nvme0n1 nvme0n2 00:23:18.185 nvme1n1 00:23:18.185 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:18.185 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:18.185 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:18.752 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:19.685 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.685 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:19.685 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 90f996cd-ec7c-4806-bb6a-876ccd26a892 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:19.686 10:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=90f996cdec7c4806bb6a876ccd26a892 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 90F996CDEC7C4806BB6A876CCD26A892 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 90F996CDEC7C4806BB6A876CCD26A892 == \9\0\F\9\9\6\C\D\E\C\7\C\4\8\0\6\B\B\6\A\8\7\6\C\C\D\2\6\A\8\9\2 ]] 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d3b6506b-d345-4d5d-8746-4d58531dc0e2 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:19.686 
10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:19.686 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d3b6506bd3454d5d87464d58531dc0e2 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D3B6506BD3454D5D87464D58531DC0E2 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D3B6506BD3454D5D87464D58531DC0E2 == \D\3\B\6\5\0\6\B\D\3\4\5\4\D\5\D\8\7\4\6\4\D\5\8\5\3\1\D\C\0\E\2 ]] 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 19aaac31-6d4c-4be2-8783-872e62627f38 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=19aaac316d4c4be28783872e62627f38 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 19AAAC316D4C4BE28783872E62627F38 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 19AAAC316D4C4BE28783872E62627F38 == \1\9\A\A\A\C\3\1\6\D\4\C\4\B\E\2\8\7\8\3\8\7\2\E\6\2\6\2\7\F\3\8 ]] 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1395579 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1395579 ']' 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1395579 00:23:19.945 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395579 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395579' 00:23:20.203 killing process with pid 1395579 00:23:20.203 10:51:07 
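The NGUID checks traced above hinge on one relationship: an NVMe NGUID is the namespace UUID with the hyphens removed, which nsid.sh derives with `tr -d -` and compares (uppercased) against `nvme id-ns ... -o json | jq -r .nguid`. A minimal sketch of that conversion, using the first UUID from the log:

```shell
#!/usr/bin/env bash
# Sketch of the uuid -> nguid mapping used by the test: strip hyphens
# (as the log's `tr -d -` does) and uppercase for the comparison.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 90f996cd-ec7c-4806-bb6a-876ccd26a892
# prints 90F996CDEC7C4806BB6A876CCD26A892, matching the log's first [[ ... == ... ]] check
```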
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1395579 00:23:20.203 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1395579 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.461 rmmod nvme_tcp 00:23:20.461 rmmod nvme_fabrics 00:23:20.461 rmmod nvme_keyring 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:20.461 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1395440 ']' 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1395440 ']' 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.720 10:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395440' 00:23:20.720 killing process with pid 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1395440 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.720 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.979 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.979 10:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.879 00:23:22.879 real 0m8.453s 00:23:22.879 user 0m8.230s 00:23:22.879 sys 0m2.720s 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:22.879 ************************************ 00:23:22.879 END TEST nvmf_nsid 00:23:22.879 ************************************ 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:22.879 00:23:22.879 real 11m43.327s 00:23:22.879 user 27m55.608s 00:23:22.879 sys 2m45.937s 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.879 10:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.879 ************************************ 00:23:22.879 END TEST nvmf_target_extra 00:23:22.879 ************************************ 00:23:22.879 10:51:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:22.879 10:51:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.879 10:51:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.879 10:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.879 ************************************ 00:23:22.879 START TEST nvmf_host 00:23:22.879 ************************************ 00:23:22.879 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:23.168 * Looking for test storage... 
00:23:23.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.168 --rc genhtml_branch_coverage=1 00:23:23.168 --rc genhtml_function_coverage=1 00:23:23.168 --rc genhtml_legend=1 00:23:23.168 --rc geninfo_all_blocks=1 00:23:23.168 --rc geninfo_unexecuted_blocks=1 00:23:23.168 00:23:23.168 ' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.168 --rc genhtml_branch_coverage=1 00:23:23.168 --rc genhtml_function_coverage=1 00:23:23.168 --rc genhtml_legend=1 00:23:23.168 --rc 
geninfo_all_blocks=1 00:23:23.168 --rc geninfo_unexecuted_blocks=1 00:23:23.168 00:23:23.168 ' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.168 --rc genhtml_branch_coverage=1 00:23:23.168 --rc genhtml_function_coverage=1 00:23:23.168 --rc genhtml_legend=1 00:23:23.168 --rc geninfo_all_blocks=1 00:23:23.168 --rc geninfo_unexecuted_blocks=1 00:23:23.168 00:23:23.168 ' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.168 --rc genhtml_branch_coverage=1 00:23:23.168 --rc genhtml_function_coverage=1 00:23:23.168 --rc genhtml_legend=1 00:23:23.168 --rc geninfo_all_blocks=1 00:23:23.168 --rc geninfo_unexecuted_blocks=1 00:23:23.168 00:23:23.168 ' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.168 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.169 ************************************ 00:23:23.169 START TEST nvmf_multicontroller 00:23:23.169 ************************************ 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:23.169 * Looking for test storage... 
00:23:23.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.169 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.428 --rc genhtml_branch_coverage=1 00:23:23.428 --rc genhtml_function_coverage=1 
00:23:23.428 --rc genhtml_legend=1 00:23:23.428 --rc geninfo_all_blocks=1 00:23:23.428 --rc geninfo_unexecuted_blocks=1 00:23:23.428 00:23:23.428 ' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.428 --rc genhtml_branch_coverage=1 00:23:23.428 --rc genhtml_function_coverage=1 00:23:23.428 --rc genhtml_legend=1 00:23:23.428 --rc geninfo_all_blocks=1 00:23:23.428 --rc geninfo_unexecuted_blocks=1 00:23:23.428 00:23:23.428 ' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.428 --rc genhtml_branch_coverage=1 00:23:23.428 --rc genhtml_function_coverage=1 00:23:23.428 --rc genhtml_legend=1 00:23:23.428 --rc geninfo_all_blocks=1 00:23:23.428 --rc geninfo_unexecuted_blocks=1 00:23:23.428 00:23:23.428 ' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.428 --rc genhtml_branch_coverage=1 00:23:23.428 --rc genhtml_function_coverage=1 00:23:23.428 --rc genhtml_legend=1 00:23:23.428 --rc geninfo_all_blocks=1 00:23:23.428 --rc geninfo_unexecuted_blocks=1 00:23:23.428 00:23:23.428 ' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.428 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.429 10:51:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.429 10:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.328 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:25.329 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:25.329 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.329 10:51:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:25.329 Found net devices under 0000:09:00.0: cvl_0_0 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.329 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:25.589 Found net devices under 0000:09:00.1: cvl_0_1 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.589 10:51:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:23:25.589 00:23:25.589 --- 10.0.0.2 ping statistics --- 00:23:25.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.589 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:23:25.589 00:23:25.589 --- 10.0.0.1 ping statistics --- 00:23:25.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.589 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1398528 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1398528 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1398528 ']' 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.589 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.590 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.590 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.590 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.590 [2024-11-19 10:51:13.170126] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:25.590 [2024-11-19 10:51:13.170238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.849 [2024-11-19 10:51:13.241644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.849 [2024-11-19 10:51:13.300174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.849 [2024-11-19 10:51:13.300231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:25.849 [2024-11-19 10:51:13.300254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.849 [2024-11-19 10:51:13.300265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.849 [2024-11-19 10:51:13.300273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.849 [2024-11-19 10:51:13.301759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.849 [2024-11-19 10:51:13.301856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.849 [2024-11-19 10:51:13.301859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.849 [2024-11-19 10:51:13.439860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.849 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 Malloc0 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 [2024-11-19 
10:51:13.497200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 [2024-11-19 10:51:13.505107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 Malloc1 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1398559 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:26.108 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1398559 /var/tmp/bdevperf.sock 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1398559 ']' 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.109 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.367 NVMe0n1 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.367 1 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.367 10:51:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.367 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.626 request: 00:23:26.626 { 00:23:26.626 "name": "NVMe0", 00:23:26.626 "trtype": "tcp", 00:23:26.626 "traddr": "10.0.0.2", 00:23:26.626 "adrfam": "ipv4", 00:23:26.626 "trsvcid": "4420", 00:23:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.626 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.626 "hostaddr": "10.0.0.1", 00:23:26.626 "prchk_reftag": false, 00:23:26.626 "prchk_guard": false, 00:23:26.626 "hdgst": false, 00:23:26.626 "ddgst": false, 00:23:26.626 "allow_unrecognized_csi": false, 00:23:26.626 "method": "bdev_nvme_attach_controller", 00:23:26.626 "req_id": 1 00:23:26.626 } 00:23:26.626 Got JSON-RPC error response 00:23:26.626 response: 00:23:26.626 { 00:23:26.626 "code": -114, 00:23:26.626 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.626 } 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.626 10:51:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.626 10:51:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.626 request: 00:23:26.626 { 00:23:26.626 "name": "NVMe0", 00:23:26.626 "trtype": "tcp", 00:23:26.626 "traddr": "10.0.0.2", 00:23:26.626 "adrfam": "ipv4", 00:23:26.626 "trsvcid": "4420", 00:23:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.626 "hostaddr": "10.0.0.1", 00:23:26.626 "prchk_reftag": false, 00:23:26.626 "prchk_guard": false, 00:23:26.626 "hdgst": false, 00:23:26.626 "ddgst": false, 00:23:26.626 "allow_unrecognized_csi": false, 00:23:26.626 "method": "bdev_nvme_attach_controller", 00:23:26.626 "req_id": 1 00:23:26.626 } 00:23:26.626 Got JSON-RPC error response 00:23:26.626 response: 00:23:26.626 { 00:23:26.626 "code": -114, 00:23:26.626 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.626 } 00:23:26.626 10:51:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.626 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 request: 00:23:26.627 { 00:23:26.627 "name": "NVMe0", 00:23:26.627 "trtype": "tcp", 00:23:26.627 "traddr": "10.0.0.2", 00:23:26.627 "adrfam": "ipv4", 00:23:26.627 "trsvcid": "4420", 00:23:26.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.627 "hostaddr": "10.0.0.1", 00:23:26.627 "prchk_reftag": false, 00:23:26.627 "prchk_guard": false, 00:23:26.627 "hdgst": false, 00:23:26.627 "ddgst": false, 00:23:26.627 "multipath": "disable", 00:23:26.627 "allow_unrecognized_csi": false, 00:23:26.627 "method": "bdev_nvme_attach_controller", 00:23:26.627 "req_id": 1 00:23:26.627 } 00:23:26.627 Got JSON-RPC error response 00:23:26.627 response: 00:23:26.627 { 00:23:26.627 "code": -114, 00:23:26.627 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.627 } 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 request: 00:23:26.627 { 00:23:26.627 "name": "NVMe0", 00:23:26.627 "trtype": "tcp", 00:23:26.627 "traddr": "10.0.0.2", 00:23:26.627 "adrfam": "ipv4", 00:23:26.627 "trsvcid": "4420", 00:23:26.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.627 "hostaddr": "10.0.0.1", 00:23:26.627 "prchk_reftag": false, 00:23:26.627 "prchk_guard": false, 00:23:26.627 "hdgst": false, 00:23:26.627 "ddgst": false, 00:23:26.627 "multipath": "failover", 00:23:26.627 "allow_unrecognized_csi": false, 00:23:26.627 "method": "bdev_nvme_attach_controller", 00:23:26.627 "req_id": 1 00:23:26.627 } 00:23:26.627 Got JSON-RPC error response 00:23:26.627 response: 00:23:26.627 { 00:23:26.627 "code": -114, 00:23:26.627 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.627 } 00:23:26.627 10:51:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 NVMe0n1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:26.627 10:51:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.003 { 00:23:28.003 "results": [ 00:23:28.003 { 00:23:28.003 "job": "NVMe0n1", 00:23:28.003 "core_mask": "0x1", 00:23:28.003 "workload": "write", 00:23:28.003 "status": "finished", 00:23:28.003 "queue_depth": 128, 00:23:28.003 "io_size": 4096, 00:23:28.003 "runtime": 1.010427, 00:23:28.003 "iops": 17849.879308450785, 00:23:28.003 "mibps": 69.72609104863588, 00:23:28.003 "io_failed": 0, 00:23:28.003 "io_timeout": 0, 00:23:28.003 "avg_latency_us": 7158.93276820844, 00:23:28.003 "min_latency_us": 2560.7585185185185, 00:23:28.003 "max_latency_us": 17184.995555555557 00:23:28.003 } 00:23:28.003 ], 00:23:28.003 "core_count": 1 00:23:28.003 } 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1398559 ']' 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398559' 00:23:28.003 killing process with pid 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1398559 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:28.003 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:28.004 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.004 [2024-11-19 10:51:13.606200] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:23:28.004 [2024-11-19 10:51:13.606336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398559 ] 00:23:28.004 [2024-11-19 10:51:13.675701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.004 [2024-11-19 10:51:13.736683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.004 [2024-11-19 10:51:14.184576] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name de097113-9c3c-4803-aa2d-afc344aacbe1 already exists 00:23:28.004 [2024-11-19 10:51:14.184630] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:de097113-9c3c-4803-aa2d-afc344aacbe1 alias for bdev NVMe1n1 00:23:28.004 [2024-11-19 10:51:14.184645] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:28.004 Running I/O for 1 seconds... 00:23:28.004 17781.00 IOPS, 69.46 MiB/s 00:23:28.004 Latency(us) 00:23:28.004 [2024-11-19T09:51:15.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.004 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:28.004 NVMe0n1 : 1.01 17849.88 69.73 0.00 0.00 7158.93 2560.76 17185.00 00:23:28.004 [2024-11-19T09:51:15.627Z] =================================================================================================================== 00:23:28.004 [2024-11-19T09:51:15.627Z] Total : 17849.88 69.73 0.00 0.00 7158.93 2560.76 17185.00 00:23:28.004 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.004 00:23:28.004 Latency(us) 00:23:28.004 [2024-11-19T09:51:15.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.004 [2024-11-19T09:51:15.627Z] =================================================================================================================== 00:23:28.004 [2024-11-19T09:51:15.627Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:28.004 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.004 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.004 rmmod nvme_tcp 00:23:28.262 rmmod nvme_fabrics 00:23:28.262 rmmod nvme_keyring 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1398528 ']' 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1398528 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1398528 ']' 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1398528 
00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398528 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398528' 00:23:28.262 killing process with pid 1398528 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1398528 00:23:28.262 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1398528 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.521 10:51:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.427 10:51:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.427 00:23:30.427 real 0m7.360s 00:23:30.427 user 0m10.909s 00:23:30.427 sys 0m2.411s 00:23:30.427 10:51:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.427 10:51:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.427 ************************************ 00:23:30.427 END TEST nvmf_multicontroller 00:23:30.427 ************************************ 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.687 ************************************ 00:23:30.687 START TEST nvmf_aer 00:23:30.687 ************************************ 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.687 * Looking for test storage... 
00:23:30.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.687 --rc genhtml_branch_coverage=1 00:23:30.687 --rc genhtml_function_coverage=1 00:23:30.687 --rc genhtml_legend=1 00:23:30.687 --rc geninfo_all_blocks=1 00:23:30.687 --rc geninfo_unexecuted_blocks=1 00:23:30.687 00:23:30.687 ' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.687 --rc 
genhtml_branch_coverage=1 00:23:30.687 --rc genhtml_function_coverage=1 00:23:30.687 --rc genhtml_legend=1 00:23:30.687 --rc geninfo_all_blocks=1 00:23:30.687 --rc geninfo_unexecuted_blocks=1 00:23:30.687 00:23:30.687 ' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.687 --rc genhtml_branch_coverage=1 00:23:30.687 --rc genhtml_function_coverage=1 00:23:30.687 --rc genhtml_legend=1 00:23:30.687 --rc geninfo_all_blocks=1 00:23:30.687 --rc geninfo_unexecuted_blocks=1 00:23:30.687 00:23:30.687 ' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.687 --rc genhtml_branch_coverage=1 00:23:30.687 --rc genhtml_function_coverage=1 00:23:30.687 --rc genhtml_legend=1 00:23:30.687 --rc geninfo_all_blocks=1 00:23:30.687 --rc geninfo_unexecuted_blocks=1 00:23:30.687 00:23:30.687 ' 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.687 10:51:18 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:30.687 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.688 10:51:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:33.221 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:33.221 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.221 10:51:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:33.221 Found net devices under 0000:09:00.0: cvl_0_0 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:33.221 Found net devices under 0000:09:00.1: cvl_0_1 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.221 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:33.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:23:33.222 00:23:33.222 --- 10.0.0.2 ping statistics --- 00:23:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.222 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:23:33.222 00:23:33.222 --- 10.0.0.1 ping statistics --- 00:23:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.222 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1400893 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1400893 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1400893 ']' 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.222 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 [2024-11-19 10:51:20.698958] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:33.222 [2024-11-19 10:51:20.699028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.222 [2024-11-19 10:51:20.768531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.222 [2024-11-19 10:51:20.824824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:33.222 [2024-11-19 10:51:20.824872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.222 [2024-11-19 10:51:20.824894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.222 [2024-11-19 10:51:20.824904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.222 [2024-11-19 10:51:20.824914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.222 [2024-11-19 10:51:20.826433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.222 [2024-11-19 10:51:20.826569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.222 [2024-11-19 10:51:20.826643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.222 [2024-11-19 10:51:20.826646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.480 [2024-11-19 10:51:20.979710] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.480 10:51:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.480 Malloc0 00:23:33.480 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.480 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:33.480 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.480 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.480 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.481 [2024-11-19 10:51:21.046512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.481 [ 00:23:33.481 { 00:23:33.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.481 "subtype": "Discovery", 00:23:33.481 "listen_addresses": [], 00:23:33.481 "allow_any_host": true, 00:23:33.481 "hosts": [] 00:23:33.481 }, 00:23:33.481 { 00:23:33.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.481 "subtype": "NVMe", 00:23:33.481 "listen_addresses": [ 00:23:33.481 { 00:23:33.481 "trtype": "TCP", 00:23:33.481 "adrfam": "IPv4", 00:23:33.481 "traddr": "10.0.0.2", 00:23:33.481 "trsvcid": "4420" 00:23:33.481 } 00:23:33.481 ], 00:23:33.481 "allow_any_host": true, 00:23:33.481 "hosts": [], 00:23:33.481 "serial_number": "SPDK00000000000001", 00:23:33.481 "model_number": "SPDK bdev Controller", 00:23:33.481 "max_namespaces": 2, 00:23:33.481 "min_cntlid": 1, 00:23:33.481 "max_cntlid": 65519, 00:23:33.481 "namespaces": [ 00:23:33.481 { 00:23:33.481 "nsid": 1, 00:23:33.481 "bdev_name": "Malloc0", 00:23:33.481 "name": "Malloc0", 00:23:33.481 "nguid": "77B96B5F4AFB43B389FF1668312FA331", 00:23:33.481 "uuid": "77b96b5f-4afb-43b3-89ff-1668312fa331" 00:23:33.481 } 00:23:33.481 ] 00:23:33.481 } 00:23:33.481 ] 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1400918 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:33.481 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.739 Malloc1 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.739 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.739 [ 00:23:33.739 { 00:23:33.739 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.739 "subtype": "Discovery", 00:23:33.739 "listen_addresses": [], 00:23:33.739 "allow_any_host": true, 00:23:33.739 "hosts": [] 00:23:33.740 }, 00:23:33.740 { 00:23:33.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.740 "subtype": "NVMe", 00:23:33.740 "listen_addresses": [ 00:23:33.740 { 00:23:33.740 "trtype": "TCP", 00:23:33.740 "adrfam": "IPv4", 00:23:33.740 "traddr": "10.0.0.2", 00:23:33.740 "trsvcid": "4420" 00:23:33.740 } 00:23:33.740 ], 00:23:33.740 "allow_any_host": true, 00:23:33.740 "hosts": [], 00:23:33.740 "serial_number": "SPDK00000000000001", 00:23:33.740 "model_number": 
"SPDK bdev Controller", 00:23:33.740 "max_namespaces": 2, 00:23:33.740 "min_cntlid": 1, 00:23:33.740 "max_cntlid": 65519, 00:23:33.740 "namespaces": [ 00:23:33.740 { 00:23:33.740 "nsid": 1, 00:23:33.740 "bdev_name": "Malloc0", 00:23:33.740 "name": "Malloc0", 00:23:33.740 "nguid": "77B96B5F4AFB43B389FF1668312FA331", 00:23:33.740 "uuid": "77b96b5f-4afb-43b3-89ff-1668312fa331" 00:23:33.740 }, 00:23:33.740 { 00:23:33.740 "nsid": 2, 00:23:33.740 "bdev_name": "Malloc1", 00:23:33.740 "name": "Malloc1", 00:23:33.740 "nguid": "D7870E69686A43B6A96BC8A73CB80DA5", 00:23:33.740 "uuid": "d7870e69-686a-43b6-a96b-c8a73cb80da5" 00:23:33.740 } 00:23:33.740 ] 00:23:33.740 } 00:23:33.740 ] 00:23:33.740 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.740 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1400918 00:23:33.740 Asynchronous Event Request test 00:23:33.740 Attaching to 10.0.0.2 00:23:33.740 Attached to 10.0.0.2 00:23:33.740 Registering asynchronous event callbacks... 00:23:33.740 Starting namespace attribute notice tests for all controllers... 00:23:33.740 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:33.740 aer_cb - Changed Namespace 00:23:33.740 Cleaning up... 
00:23:33.740 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:33.740 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.740 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.998 rmmod nvme_tcp 
00:23:33.998 rmmod nvme_fabrics 00:23:33.998 rmmod nvme_keyring 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1400893 ']' 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1400893 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1400893 ']' 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1400893 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400893 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400893' 00:23:33.998 killing process with pid 1400893 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1400893 00:23:33.998 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1400893 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.257 10:51:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.257 10:51:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.795 00:23:36.795 real 0m5.724s 00:23:36.795 user 0m4.548s 00:23:36.795 sys 0m2.104s 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.795 ************************************ 00:23:36.795 END TEST nvmf_aer 00:23:36.795 ************************************ 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.795 ************************************ 00:23:36.795 START TEST nvmf_async_init 
00:23:36.795 ************************************ 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:36.795 * Looking for test storage... 00:23:36.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.795 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:36.795 --rc genhtml_branch_coverage=1 00:23:36.795 --rc genhtml_function_coverage=1 00:23:36.795 --rc genhtml_legend=1 00:23:36.795 --rc geninfo_all_blocks=1 00:23:36.795 --rc geninfo_unexecuted_blocks=1 00:23:36.795 00:23:36.795 ' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.795 --rc genhtml_branch_coverage=1 00:23:36.795 --rc genhtml_function_coverage=1 00:23:36.795 --rc genhtml_legend=1 00:23:36.795 --rc geninfo_all_blocks=1 00:23:36.795 --rc geninfo_unexecuted_blocks=1 00:23:36.795 00:23:36.795 ' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.795 --rc genhtml_branch_coverage=1 00:23:36.795 --rc genhtml_function_coverage=1 00:23:36.795 --rc genhtml_legend=1 00:23:36.795 --rc geninfo_all_blocks=1 00:23:36.795 --rc geninfo_unexecuted_blocks=1 00:23:36.795 00:23:36.795 ' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.795 --rc genhtml_branch_coverage=1 00:23:36.795 --rc genhtml_function_coverage=1 00:23:36.795 --rc genhtml_legend=1 00:23:36.795 --rc geninfo_all_blocks=1 00:23:36.795 --rc geninfo_unexecuted_blocks=1 00:23:36.795 00:23:36.795 ' 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.795 10:51:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.795 10:51:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.795 
10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.795 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5a21602c9fe9436fa98b1dc3911eeab5 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.796 10:51:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.699 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.700 10:51:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:38.700 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:38.700 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:38.700 Found net devices under 0000:09:00.0: cvl_0_0 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:38.700 Found net devices under 0000:09:00.1: cvl_0_1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:38.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:23:38.700 00:23:38.700 --- 10.0.0.2 ping statistics --- 00:23:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.700 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:23:38.700 00:23:38.700 --- 10.0.0.1 ping statistics --- 00:23:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.700 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.700 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1402885 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1402885 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1402885 ']' 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.701 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.959 [2024-11-19 10:51:26.323394] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:23:38.959 [2024-11-19 10:51:26.323469] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.959 [2024-11-19 10:51:26.397150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.959 [2024-11-19 10:51:26.451529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.959 [2024-11-19 10:51:26.451586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.959 [2024-11-19 10:51:26.451608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.959 [2024-11-19 10:51:26.451620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.959 [2024-11-19 10:51:26.451629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.959 [2024-11-19 10:51:26.452204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.959 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.959 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:38.959 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.959 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.959 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 [2024-11-19 10:51:26.591759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 null0 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5a21602c9fe9436fa98b1dc3911eeab5 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.217 [2024-11-19 10:51:26.632005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.217 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.476 nvme0n1 00:23:39.476 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.476 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.476 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.476 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.476 [ 00:23:39.476 { 00:23:39.476 "name": "nvme0n1", 00:23:39.476 "aliases": [ 00:23:39.476 "5a21602c-9fe9-436f-a98b-1dc3911eeab5" 00:23:39.476 ], 00:23:39.476 "product_name": "NVMe disk", 00:23:39.476 "block_size": 512, 00:23:39.476 "num_blocks": 2097152, 00:23:39.476 "uuid": "5a21602c-9fe9-436f-a98b-1dc3911eeab5", 00:23:39.476 "numa_id": 0, 00:23:39.476 "assigned_rate_limits": { 00:23:39.476 "rw_ios_per_sec": 0, 00:23:39.476 "rw_mbytes_per_sec": 0, 00:23:39.476 "r_mbytes_per_sec": 0, 00:23:39.476 "w_mbytes_per_sec": 0 00:23:39.476 }, 00:23:39.476 "claimed": false, 00:23:39.476 "zoned": false, 00:23:39.476 "supported_io_types": { 00:23:39.476 "read": true, 00:23:39.476 "write": true, 00:23:39.476 "unmap": false, 00:23:39.476 "flush": true, 00:23:39.476 "reset": true, 00:23:39.476 "nvme_admin": true, 00:23:39.476 "nvme_io": true, 00:23:39.476 "nvme_io_md": false, 00:23:39.476 "write_zeroes": true, 00:23:39.476 "zcopy": false, 00:23:39.476 "get_zone_info": false, 00:23:39.476 "zone_management": false, 00:23:39.476 "zone_append": false, 00:23:39.476 "compare": true, 00:23:39.476 "compare_and_write": true, 00:23:39.476 "abort": true, 00:23:39.476 "seek_hole": false, 00:23:39.476 "seek_data": false, 00:23:39.476 "copy": true, 00:23:39.476 
"nvme_iov_md": false 00:23:39.476 }, 00:23:39.476 "memory_domains": [ 00:23:39.476 { 00:23:39.476 "dma_device_id": "system", 00:23:39.476 "dma_device_type": 1 00:23:39.476 } 00:23:39.476 ], 00:23:39.476 "driver_specific": { 00:23:39.476 "nvme": [ 00:23:39.476 { 00:23:39.476 "trid": { 00:23:39.476 "trtype": "TCP", 00:23:39.476 "adrfam": "IPv4", 00:23:39.476 "traddr": "10.0.0.2", 00:23:39.476 "trsvcid": "4420", 00:23:39.476 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.476 }, 00:23:39.476 "ctrlr_data": { 00:23:39.477 "cntlid": 1, 00:23:39.477 "vendor_id": "0x8086", 00:23:39.477 "model_number": "SPDK bdev Controller", 00:23:39.477 "serial_number": "00000000000000000000", 00:23:39.477 "firmware_revision": "25.01", 00:23:39.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.477 "oacs": { 00:23:39.477 "security": 0, 00:23:39.477 "format": 0, 00:23:39.477 "firmware": 0, 00:23:39.477 "ns_manage": 0 00:23:39.477 }, 00:23:39.477 "multi_ctrlr": true, 00:23:39.477 "ana_reporting": false 00:23:39.477 }, 00:23:39.477 "vs": { 00:23:39.477 "nvme_version": "1.3" 00:23:39.477 }, 00:23:39.477 "ns_data": { 00:23:39.477 "id": 1, 00:23:39.477 "can_share": true 00:23:39.477 } 00:23:39.477 } 00:23:39.477 ], 00:23:39.477 "mp_policy": "active_passive" 00:23:39.477 } 00:23:39.477 } 00:23:39.477 ] 00:23:39.477 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:39.477 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 [2024-11-19 10:51:26.885017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.477 [2024-11-19 10:51:26.885091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1b0fda0 (9): Bad file descriptor 00:23:39.477 [2024-11-19 10:51:27.017470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 [ 00:23:39.477 { 00:23:39.477 "name": "nvme0n1", 00:23:39.477 "aliases": [ 00:23:39.477 "5a21602c-9fe9-436f-a98b-1dc3911eeab5" 00:23:39.477 ], 00:23:39.477 "product_name": "NVMe disk", 00:23:39.477 "block_size": 512, 00:23:39.477 "num_blocks": 2097152, 00:23:39.477 "uuid": "5a21602c-9fe9-436f-a98b-1dc3911eeab5", 00:23:39.477 "numa_id": 0, 00:23:39.477 "assigned_rate_limits": { 00:23:39.477 "rw_ios_per_sec": 0, 00:23:39.477 "rw_mbytes_per_sec": 0, 00:23:39.477 "r_mbytes_per_sec": 0, 00:23:39.477 "w_mbytes_per_sec": 0 00:23:39.477 }, 00:23:39.477 "claimed": false, 00:23:39.477 "zoned": false, 00:23:39.477 "supported_io_types": { 00:23:39.477 "read": true, 00:23:39.477 "write": true, 00:23:39.477 "unmap": false, 00:23:39.477 "flush": true, 00:23:39.477 "reset": true, 00:23:39.477 "nvme_admin": true, 00:23:39.477 "nvme_io": true, 00:23:39.477 "nvme_io_md": false, 00:23:39.477 "write_zeroes": true, 00:23:39.477 "zcopy": false, 00:23:39.477 "get_zone_info": false, 00:23:39.477 "zone_management": false, 00:23:39.477 "zone_append": false, 00:23:39.477 "compare": true, 00:23:39.477 "compare_and_write": true, 00:23:39.477 "abort": true, 00:23:39.477 "seek_hole": false, 00:23:39.477 "seek_data": false, 00:23:39.477 "copy": true, 00:23:39.477 "nvme_iov_md": false 00:23:39.477 }, 00:23:39.477 "memory_domains": [ 
00:23:39.477 { 00:23:39.477 "dma_device_id": "system", 00:23:39.477 "dma_device_type": 1 00:23:39.477 } 00:23:39.477 ], 00:23:39.477 "driver_specific": { 00:23:39.477 "nvme": [ 00:23:39.477 { 00:23:39.477 "trid": { 00:23:39.477 "trtype": "TCP", 00:23:39.477 "adrfam": "IPv4", 00:23:39.477 "traddr": "10.0.0.2", 00:23:39.477 "trsvcid": "4420", 00:23:39.477 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.477 }, 00:23:39.477 "ctrlr_data": { 00:23:39.477 "cntlid": 2, 00:23:39.477 "vendor_id": "0x8086", 00:23:39.477 "model_number": "SPDK bdev Controller", 00:23:39.477 "serial_number": "00000000000000000000", 00:23:39.477 "firmware_revision": "25.01", 00:23:39.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.477 "oacs": { 00:23:39.477 "security": 0, 00:23:39.477 "format": 0, 00:23:39.477 "firmware": 0, 00:23:39.477 "ns_manage": 0 00:23:39.477 }, 00:23:39.477 "multi_ctrlr": true, 00:23:39.477 "ana_reporting": false 00:23:39.477 }, 00:23:39.477 "vs": { 00:23:39.477 "nvme_version": "1.3" 00:23:39.477 }, 00:23:39.477 "ns_data": { 00:23:39.477 "id": 1, 00:23:39.477 "can_share": true 00:23:39.477 } 00:23:39.477 } 00:23:39.477 ], 00:23:39.477 "mp_policy": "active_passive" 00:23:39.477 } 00:23:39.477 } 00:23:39.477 ] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wDHHpezjlM 
00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wDHHpezjlM 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.wDHHpezjlM 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 [2024-11-19 10:51:27.081710] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.477 [2024-11-19 10:51:27.081833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.477 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.477 [2024-11-19 10:51:27.097771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.735 nvme0n1 00:23:39.735 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.735 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.735 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.735 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.735 [ 00:23:39.735 { 00:23:39.735 "name": "nvme0n1", 00:23:39.735 "aliases": [ 00:23:39.735 "5a21602c-9fe9-436f-a98b-1dc3911eeab5" 00:23:39.735 ], 00:23:39.735 "product_name": "NVMe disk", 00:23:39.735 "block_size": 512, 00:23:39.735 "num_blocks": 2097152, 00:23:39.735 "uuid": "5a21602c-9fe9-436f-a98b-1dc3911eeab5", 00:23:39.735 "numa_id": 0, 00:23:39.735 "assigned_rate_limits": { 00:23:39.735 "rw_ios_per_sec": 0, 00:23:39.735 
"rw_mbytes_per_sec": 0, 00:23:39.735 "r_mbytes_per_sec": 0, 00:23:39.735 "w_mbytes_per_sec": 0 00:23:39.735 }, 00:23:39.735 "claimed": false, 00:23:39.735 "zoned": false, 00:23:39.735 "supported_io_types": { 00:23:39.735 "read": true, 00:23:39.735 "write": true, 00:23:39.735 "unmap": false, 00:23:39.735 "flush": true, 00:23:39.735 "reset": true, 00:23:39.735 "nvme_admin": true, 00:23:39.735 "nvme_io": true, 00:23:39.735 "nvme_io_md": false, 00:23:39.735 "write_zeroes": true, 00:23:39.735 "zcopy": false, 00:23:39.735 "get_zone_info": false, 00:23:39.735 "zone_management": false, 00:23:39.735 "zone_append": false, 00:23:39.735 "compare": true, 00:23:39.735 "compare_and_write": true, 00:23:39.735 "abort": true, 00:23:39.735 "seek_hole": false, 00:23:39.735 "seek_data": false, 00:23:39.735 "copy": true, 00:23:39.735 "nvme_iov_md": false 00:23:39.736 }, 00:23:39.736 "memory_domains": [ 00:23:39.736 { 00:23:39.736 "dma_device_id": "system", 00:23:39.736 "dma_device_type": 1 00:23:39.736 } 00:23:39.736 ], 00:23:39.736 "driver_specific": { 00:23:39.736 "nvme": [ 00:23:39.736 { 00:23:39.736 "trid": { 00:23:39.736 "trtype": "TCP", 00:23:39.736 "adrfam": "IPv4", 00:23:39.736 "traddr": "10.0.0.2", 00:23:39.736 "trsvcid": "4421", 00:23:39.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.736 }, 00:23:39.736 "ctrlr_data": { 00:23:39.736 "cntlid": 3, 00:23:39.736 "vendor_id": "0x8086", 00:23:39.736 "model_number": "SPDK bdev Controller", 00:23:39.736 "serial_number": "00000000000000000000", 00:23:39.736 "firmware_revision": "25.01", 00:23:39.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.736 "oacs": { 00:23:39.736 "security": 0, 00:23:39.736 "format": 0, 00:23:39.736 "firmware": 0, 00:23:39.736 "ns_manage": 0 00:23:39.736 }, 00:23:39.736 "multi_ctrlr": true, 00:23:39.736 "ana_reporting": false 00:23:39.736 }, 00:23:39.736 "vs": { 00:23:39.736 "nvme_version": "1.3" 00:23:39.736 }, 00:23:39.736 "ns_data": { 00:23:39.736 "id": 1, 00:23:39.736 "can_share": true 00:23:39.736 } 
00:23:39.736 } 00:23:39.736 ], 00:23:39.736 "mp_policy": "active_passive" 00:23:39.736 } 00:23:39.736 } 00:23:39.736 ] 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.wDHHpezjlM 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.736 rmmod nvme_tcp 00:23:39.736 rmmod nvme_fabrics 00:23:39.736 rmmod nvme_keyring 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:39.736 10:51:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1402885 ']' 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1402885 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1402885 ']' 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1402885 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1402885 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1402885' 00:23:39.736 killing process with pid 1402885 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1402885 00:23:39.736 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1402885 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.995 
10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.995 10:51:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.535 00:23:42.535 real 0m5.687s 00:23:42.535 user 0m2.176s 00:23:42.535 sys 0m1.951s 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.535 ************************************ 00:23:42.535 END TEST nvmf_async_init 00:23:42.535 ************************************ 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.535 ************************************ 00:23:42.535 START TEST dma 00:23:42.535 ************************************ 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:42.535 * Looking for test storage... 00:23:42.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.535 --rc genhtml_branch_coverage=1 00:23:42.535 --rc genhtml_function_coverage=1 00:23:42.535 --rc genhtml_legend=1 00:23:42.535 --rc geninfo_all_blocks=1 00:23:42.535 --rc geninfo_unexecuted_blocks=1 00:23:42.535 00:23:42.535 ' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.535 --rc genhtml_branch_coverage=1 00:23:42.535 --rc genhtml_function_coverage=1 
00:23:42.535 --rc genhtml_legend=1 00:23:42.535 --rc geninfo_all_blocks=1 00:23:42.535 --rc geninfo_unexecuted_blocks=1 00:23:42.535 00:23:42.535 ' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.535 --rc genhtml_branch_coverage=1 00:23:42.535 --rc genhtml_function_coverage=1 00:23:42.535 --rc genhtml_legend=1 00:23:42.535 --rc geninfo_all_blocks=1 00:23:42.535 --rc geninfo_unexecuted_blocks=1 00:23:42.535 00:23:42.535 ' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.535 --rc genhtml_branch_coverage=1 00:23:42.535 --rc genhtml_function_coverage=1 00:23:42.535 --rc genhtml_legend=1 00:23:42.535 --rc geninfo_all_blocks=1 00:23:42.535 --rc geninfo_unexecuted_blocks=1 00:23:42.535 00:23:42.535 ' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.535 10:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:42.536 
10:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:42.536 00:23:42.536 real 0m0.146s 00:23:42.536 user 0m0.104s 00:23:42.536 sys 0m0.051s 00:23:42.536 10:51:29 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:42.536 ************************************ 00:23:42.536 END TEST dma 00:23:42.536 ************************************ 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.536 ************************************ 00:23:42.536 START TEST nvmf_identify 00:23:42.536 ************************************ 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:42.536 * Looking for test storage... 
00:23:42.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.536 --rc genhtml_branch_coverage=1 00:23:42.536 --rc genhtml_function_coverage=1 00:23:42.536 --rc genhtml_legend=1 00:23:42.536 --rc geninfo_all_blocks=1 00:23:42.536 --rc geninfo_unexecuted_blocks=1 00:23:42.536 00:23:42.536 ' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.536 --rc genhtml_branch_coverage=1 00:23:42.536 --rc genhtml_function_coverage=1 00:23:42.536 --rc genhtml_legend=1 00:23:42.536 --rc geninfo_all_blocks=1 00:23:42.536 --rc geninfo_unexecuted_blocks=1 00:23:42.536 00:23:42.536 ' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.536 --rc genhtml_branch_coverage=1 00:23:42.536 --rc genhtml_function_coverage=1 00:23:42.536 --rc genhtml_legend=1 00:23:42.536 --rc geninfo_all_blocks=1 00:23:42.536 --rc geninfo_unexecuted_blocks=1 00:23:42.536 00:23:42.536 ' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.536 --rc genhtml_branch_coverage=1 00:23:42.536 --rc genhtml_function_coverage=1 00:23:42.536 --rc genhtml_legend=1 00:23:42.536 --rc geninfo_all_blocks=1 00:23:42.536 --rc geninfo_unexecuted_blocks=1 00:23:42.536 00:23:42.536 ' 00:23:42.536 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.537 10:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.441 10:51:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:44.441 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.441 
10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:44.441 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.441 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:44.442 Found net devices under 0000:09:00.0: cvl_0_0 00:23:44.442 10:51:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:44.442 Found net devices under 0000:09:00.1: cvl_0_1 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.442 10:51:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:23:44.700 00:23:44.700 --- 10.0.0.2 ping statistics --- 00:23:44.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.700 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:23:44.700 00:23:44.700 --- 10.0.0.1 ping statistics --- 00:23:44.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.700 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1405121 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1405121 00:23:44.700 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1405121 ']' 00:23:44.701 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.701 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.701 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:44.701 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.701 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.701 [2024-11-19 10:51:32.217091] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:44.701 [2024-11-19 10:51:32.217200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.701 [2024-11-19 10:51:32.289685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.959 [2024-11-19 10:51:32.348636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.959 [2024-11-19 10:51:32.348684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.959 [2024-11-19 10:51:32.348707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.959 [2024-11-19 10:51:32.348718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.959 [2024-11-19 10:51:32.348728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.959 [2024-11-19 10:51:32.350233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.959 [2024-11-19 10:51:32.350354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.959 [2024-11-19 10:51:32.350383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.959 [2024-11-19 10:51:32.350386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 [2024-11-19 10:51:32.474094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 Malloc0 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 [2024-11-19 10:51:32.565341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 10:51:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.959 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.220 [ 00:23:45.220 { 00:23:45.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:45.220 "subtype": "Discovery", 00:23:45.220 "listen_addresses": [ 00:23:45.220 { 00:23:45.220 "trtype": "TCP", 00:23:45.220 "adrfam": "IPv4", 00:23:45.220 "traddr": "10.0.0.2", 00:23:45.220 "trsvcid": "4420" 00:23:45.220 } 00:23:45.220 ], 00:23:45.220 "allow_any_host": true, 00:23:45.220 "hosts": [] 00:23:45.220 }, 00:23:45.220 { 00:23:45.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.220 "subtype": "NVMe", 00:23:45.220 "listen_addresses": [ 00:23:45.220 { 00:23:45.220 "trtype": "TCP", 00:23:45.220 "adrfam": "IPv4", 00:23:45.220 "traddr": "10.0.0.2", 00:23:45.220 "trsvcid": "4420" 00:23:45.220 } 00:23:45.220 ], 00:23:45.220 "allow_any_host": true, 00:23:45.220 "hosts": [], 00:23:45.220 "serial_number": "SPDK00000000000001", 00:23:45.220 "model_number": "SPDK bdev Controller", 00:23:45.220 "max_namespaces": 32, 00:23:45.220 "min_cntlid": 1, 00:23:45.220 "max_cntlid": 65519, 00:23:45.220 "namespaces": [ 00:23:45.220 { 00:23:45.220 "nsid": 1, 00:23:45.220 "bdev_name": "Malloc0", 00:23:45.220 "name": "Malloc0", 00:23:45.220 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:45.220 "eui64": "ABCDEF0123456789", 00:23:45.220 "uuid": "4729f783-805e-4d51-b6b9-d1b3868946c3" 00:23:45.220 } 00:23:45.220 ] 00:23:45.220 } 00:23:45.220 ] 00:23:45.220 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.220 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:45.220 [2024-11-19 10:51:32.605039] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:45.220 [2024-11-19 10:51:32.605088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405156 ] 00:23:45.220 [2024-11-19 10:51:32.652415] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:45.220 [2024-11-19 10:51:32.652477] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:45.220 [2024-11-19 10:51:32.652488] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:45.220 [2024-11-19 10:51:32.652504] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:45.220 [2024-11-19 10:51:32.652520] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:45.220 [2024-11-19 10:51:32.656760] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:45.220 [2024-11-19 10:51:32.656820] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11ba690 0 00:23:45.220 [2024-11-19 10:51:32.667319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:45.220 [2024-11-19 10:51:32.667350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:45.220 [2024-11-19 10:51:32.667359] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:45.220 [2024-11-19 10:51:32.667365] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:45.221 [2024-11-19 10:51:32.667412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.667426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.667433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.667451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:45.221 [2024-11-19 10:51:32.667477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.678316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.678333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.678341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.678367] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:45.221 [2024-11-19 10:51:32.678380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:45.221 [2024-11-19 10:51:32.678390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:45.221 [2024-11-19 10:51:32.678413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 
00:23:45.221 [2024-11-19 10:51:32.678440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.678463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.678609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.678623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.678630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.678652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:45.221 [2024-11-19 10:51:32.678666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:45.221 [2024-11-19 10:51:32.678679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.678704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.678726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.678807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.678821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:45.221 [2024-11-19 10:51:32.678828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.678844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:45.221 [2024-11-19 10:51:32.678857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:45.221 [2024-11-19 10:51:32.678878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.678892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.678902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.678924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.679015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.679028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.679036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.679051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:45.221 [2024-11-19 10:51:32.679068] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.679094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.679115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.679197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.679211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.679218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.679233] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:45.221 [2024-11-19 10:51:32.679251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:45.221 [2024-11-19 10:51:32.679268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:45.221 [2024-11-19 10:51:32.679379] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:45.221 [2024-11-19 10:51:32.679390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:45.221 [2024-11-19 10:51:32.679405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.679429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.679465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 10:51:32.679637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.679650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.679656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.221 [2024-11-19 10:51:32.679671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:45.221 [2024-11-19 10:51:32.679687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.221 [2024-11-19 10:51:32.679702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.221 [2024-11-19 10:51:32.679713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.221 [2024-11-19 10:51:32.679734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.221 [2024-11-19 
10:51:32.679819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.221 [2024-11-19 10:51:32.679832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.221 [2024-11-19 10:51:32.679839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.679845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.222 [2024-11-19 10:51:32.679853] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:45.222 [2024-11-19 10:51:32.679861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.679882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:45.222 [2024-11-19 10:51:32.679904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.679922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.679930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.679941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.222 [2024-11-19 10:51:32.679962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.222 [2024-11-19 10:51:32.680086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.222 [2024-11-19 10:51:32.680102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:45.222 [2024-11-19 10:51:32.680110] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.680117] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ba690): datao=0, datal=4096, cccid=0 00:23:45.222 [2024-11-19 10:51:32.680125] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x121c100) on tqpair(0x11ba690): expected_datao=0, payload_size=4096 00:23:45.222 [2024-11-19 10:51:32.680132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.680149] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.680159] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.222 [2024-11-19 10:51:32.720458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.222 [2024-11-19 10:51:32.720466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.222 [2024-11-19 10:51:32.720487] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:45.222 [2024-11-19 10:51:32.720496] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:45.222 [2024-11-19 10:51:32.720503] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:45.222 [2024-11-19 10:51:32.720518] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:45.222 [2024-11-19 10:51:32.720528] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:45.222 [2024-11-19 10:51:32.720536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.720555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.720569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.222 [2024-11-19 10:51:32.720619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.222 [2024-11-19 10:51:32.720710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.222 [2024-11-19 10:51:32.720724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.222 [2024-11-19 10:51:32.720731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.222 [2024-11-19 10:51:32.720750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720774] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.222 [2024-11-19 10:51:32.720785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.222 [2024-11-19 10:51:32.720822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.222 [2024-11-19 10:51:32.720853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.222 [2024-11-19 10:51:32.720883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.720922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:45.222 [2024-11-19 10:51:32.720934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.720941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.720951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.222 [2024-11-19 10:51:32.720973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c100, cid 0, qid 0 00:23:45.222 [2024-11-19 10:51:32.720999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c280, cid 1, qid 0 00:23:45.222 [2024-11-19 10:51:32.721008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c400, cid 2, qid 0 00:23:45.222 [2024-11-19 10:51:32.721015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.222 [2024-11-19 10:51:32.721022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c700, cid 4, qid 0 00:23:45.222 [2024-11-19 10:51:32.721155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.222 [2024-11-19 10:51:32.721169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.222 [2024-11-19 10:51:32.721175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.721182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c700) on tqpair=0x11ba690 00:23:45.222 [2024-11-19 10:51:32.721196] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:45.222 [2024-11-19 10:51:32.721207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:45.222 [2024-11-19 10:51:32.721226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.222 [2024-11-19 10:51:32.721235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ba690) 00:23:45.222 [2024-11-19 10:51:32.721246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.222 [2024-11-19 10:51:32.721267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c700, cid 4, qid 0 00:23:45.222 [2024-11-19 10:51:32.725317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.223 [2024-11-19 10:51:32.725333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.223 [2024-11-19 10:51:32.725339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ba690): datao=0, datal=4096, cccid=4 00:23:45.223 [2024-11-19 10:51:32.725357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x121c700) on tqpair(0x11ba690): expected_datao=0, payload_size=4096 00:23:45.223 [2024-11-19 10:51:32.725365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725375] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725382] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.223 [2024-11-19 10:51:32.725399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.223 [2024-11-19 10:51:32.725405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x121c700) on tqpair=0x11ba690 00:23:45.223 [2024-11-19 10:51:32.725432] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:45.223 [2024-11-19 10:51:32.725468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ba690) 00:23:45.223 [2024-11-19 10:51:32.725489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.223 [2024-11-19 10:51:32.725500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ba690) 00:23:45.223 [2024-11-19 10:51:32.725522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.223 [2024-11-19 10:51:32.725550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c700, cid 4, qid 0 00:23:45.223 [2024-11-19 10:51:32.725576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c880, cid 5, qid 0 00:23:45.223 [2024-11-19 10:51:32.725728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.223 [2024-11-19 10:51:32.725740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.223 [2024-11-19 10:51:32.725747] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725753] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ba690): datao=0, datal=1024, cccid=4 00:23:45.223 [2024-11-19 10:51:32.725760] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x121c700) on tqpair(0x11ba690): expected_datao=0, payload_size=1024 00:23:45.223 [2024-11-19 10:51:32.725767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725776] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725784] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.223 [2024-11-19 10:51:32.725801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.223 [2024-11-19 10:51:32.725807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.725813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c880) on tqpair=0x11ba690 00:23:45.223 [2024-11-19 10:51:32.768313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.223 [2024-11-19 10:51:32.768346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.223 [2024-11-19 10:51:32.768354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c700) on tqpair=0x11ba690 00:23:45.223 [2024-11-19 10:51:32.768380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ba690) 00:23:45.223 [2024-11-19 10:51:32.768404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.223 [2024-11-19 10:51:32.768436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c700, cid 4, qid 0 00:23:45.223 [2024-11-19 10:51:32.768587] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.223 [2024-11-19 10:51:32.768612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.223 [2024-11-19 10:51:32.768619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ba690): datao=0, datal=3072, cccid=4 00:23:45.223 [2024-11-19 10:51:32.768633] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x121c700) on tqpair(0x11ba690): expected_datao=0, payload_size=3072 00:23:45.223 [2024-11-19 10:51:32.768640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768651] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768658] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.223 [2024-11-19 10:51:32.768680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.223 [2024-11-19 10:51:32.768687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c700) on tqpair=0x11ba690 00:23:45.223 [2024-11-19 10:51:32.768709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ba690) 00:23:45.223 [2024-11-19 10:51:32.768730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.223 [2024-11-19 10:51:32.768759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c700, cid 4, qid 0 00:23:45.223 [2024-11-19 
10:51:32.768855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.223 [2024-11-19 10:51:32.768866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.223 [2024-11-19 10:51:32.768873] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768879] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ba690): datao=0, datal=8, cccid=4 00:23:45.223 [2024-11-19 10:51:32.768887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x121c700) on tqpair(0x11ba690): expected_datao=0, payload_size=8 00:23:45.223 [2024-11-19 10:51:32.768894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.768911] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.809473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.223 [2024-11-19 10:51:32.809492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.223 [2024-11-19 10:51:32.809499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.223 [2024-11-19 10:51:32.809506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c700) on tqpair=0x11ba690 00:23:45.223 ===================================================== 00:23:45.223 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:45.223 ===================================================== 00:23:45.223 Controller Capabilities/Features 00:23:45.223 ================================ 00:23:45.223 Vendor ID: 0000 00:23:45.223 Subsystem Vendor ID: 0000 00:23:45.223 Serial Number: .................... 00:23:45.223 Model Number: ........................................ 
00:23:45.223 Firmware Version: 25.01
00:23:45.223 Recommended Arb Burst: 0
00:23:45.223 IEEE OUI Identifier: 00 00 00
00:23:45.223 Multi-path I/O
00:23:45.223 May have multiple subsystem ports: No
00:23:45.223 May have multiple controllers: No
00:23:45.223 Associated with SR-IOV VF: No
00:23:45.223 Max Data Transfer Size: 131072
00:23:45.223 Max Number of Namespaces: 0
00:23:45.223 Max Number of I/O Queues: 1024
00:23:45.223 NVMe Specification Version (VS): 1.3
00:23:45.223 NVMe Specification Version (Identify): 1.3
00:23:45.223 Maximum Queue Entries: 128
00:23:45.223 Contiguous Queues Required: Yes
00:23:45.223 Arbitration Mechanisms Supported
00:23:45.223 Weighted Round Robin: Not Supported
00:23:45.223 Vendor Specific: Not Supported
00:23:45.223 Reset Timeout: 15000 ms
00:23:45.223 Doorbell Stride: 4 bytes
00:23:45.223 NVM Subsystem Reset: Not Supported
00:23:45.223 Command Sets Supported
00:23:45.223 NVM Command Set: Supported
00:23:45.223 Boot Partition: Not Supported
00:23:45.223 Memory Page Size Minimum: 4096 bytes
00:23:45.224 Memory Page Size Maximum: 4096 bytes
00:23:45.224 Persistent Memory Region: Not Supported
00:23:45.224 Optional Asynchronous Events Supported
00:23:45.224 Namespace Attribute Notices: Not Supported
00:23:45.224 Firmware Activation Notices: Not Supported
00:23:45.224 ANA Change Notices: Not Supported
00:23:45.224 PLE Aggregate Log Change Notices: Not Supported
00:23:45.224 LBA Status Info Alert Notices: Not Supported
00:23:45.224 EGE Aggregate Log Change Notices: Not Supported
00:23:45.224 Normal NVM Subsystem Shutdown event: Not Supported
00:23:45.224 Zone Descriptor Change Notices: Not Supported
00:23:45.224 Discovery Log Change Notices: Supported
00:23:45.224 Controller Attributes
00:23:45.224 128-bit Host Identifier: Not Supported
00:23:45.224 Non-Operational Permissive Mode: Not Supported
00:23:45.224 NVM Sets: Not Supported
00:23:45.224 Read Recovery Levels: Not Supported
00:23:45.224 Endurance Groups: Not Supported
00:23:45.224 Predictable Latency Mode: Not Supported
00:23:45.224 Traffic Based Keep Alive: Not Supported
00:23:45.224 Namespace Granularity: Not Supported
00:23:45.224 SQ Associations: Not Supported
00:23:45.224 UUID List: Not Supported
00:23:45.224 Multi-Domain Subsystem: Not Supported
00:23:45.224 Fixed Capacity Management: Not Supported
00:23:45.224 Variable Capacity Management: Not Supported
00:23:45.224 Delete Endurance Group: Not Supported
00:23:45.224 Delete NVM Set: Not Supported
00:23:45.224 Extended LBA Formats Supported: Not Supported
00:23:45.224 Flexible Data Placement Supported: Not Supported
00:23:45.224
00:23:45.224 Controller Memory Buffer Support
00:23:45.224 ================================
00:23:45.224 Supported: No
00:23:45.224
00:23:45.224 Persistent Memory Region Support
00:23:45.224 ================================
00:23:45.224 Supported: No
00:23:45.224
00:23:45.224 Admin Command Set Attributes
00:23:45.224 ============================
00:23:45.224 Security Send/Receive: Not Supported
00:23:45.224 Format NVM: Not Supported
00:23:45.224 Firmware Activate/Download: Not Supported
00:23:45.224 Namespace Management: Not Supported
00:23:45.224 Device Self-Test: Not Supported
00:23:45.224 Directives: Not Supported
00:23:45.224 NVMe-MI: Not Supported
00:23:45.224 Virtualization Management: Not Supported
00:23:45.224 Doorbell Buffer Config: Not Supported
00:23:45.224 Get LBA Status Capability: Not Supported
00:23:45.224 Command & Feature Lockdown Capability: Not Supported
00:23:45.224 Abort Command Limit: 1
00:23:45.224 Async Event Request Limit: 4
00:23:45.224 Number of Firmware Slots: N/A
00:23:45.224 Firmware Slot 1 Read-Only: N/A
00:23:45.224 Firmware Activation Without Reset: N/A
00:23:45.224 Multiple Update Detection Support: N/A
00:23:45.224 Firmware Update Granularity: No Information Provided
00:23:45.224 Per-Namespace SMART Log: No
00:23:45.224 Asymmetric Namespace Access Log Page: Not Supported
00:23:45.224 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:45.224 Command Effects Log Page: Not Supported
00:23:45.224 Get Log Page Extended Data: Supported
00:23:45.224 Telemetry Log Pages: Not Supported
00:23:45.224 Persistent Event Log Pages: Not Supported
00:23:45.224 Supported Log Pages Log Page: May Support
00:23:45.224 Commands Supported & Effects Log Page: Not Supported
00:23:45.224 Feature Identifiers & Effects Log Page: May Support
00:23:45.224 NVMe-MI Commands & Effects Log Page: May Support
00:23:45.224 Data Area 4 for Telemetry Log: Not Supported
00:23:45.224 Error Log Page Entries Supported: 128
00:23:45.224 Keep Alive: Not Supported
00:23:45.224
00:23:45.224 NVM Command Set Attributes
00:23:45.224 ==========================
00:23:45.224 Submission Queue Entry Size
00:23:45.224 Max: 1
00:23:45.224 Min: 1
00:23:45.224 Completion Queue Entry Size
00:23:45.224 Max: 1
00:23:45.224 Min: 1
00:23:45.224 Number of Namespaces: 0
00:23:45.224 Compare Command: Not Supported
00:23:45.224 Write Uncorrectable Command: Not Supported
00:23:45.224 Dataset Management Command: Not Supported
00:23:45.224 Write Zeroes Command: Not Supported
00:23:45.224 Set Features Save Field: Not Supported
00:23:45.224 Reservations: Not Supported
00:23:45.224 Timestamp: Not Supported
00:23:45.224 Copy: Not Supported
00:23:45.224 Volatile Write Cache: Not Present
00:23:45.224 Atomic Write Unit (Normal): 1
00:23:45.224 Atomic Write Unit (PFail): 1
00:23:45.224 Atomic Compare & Write Unit: 1
00:23:45.224 Fused Compare & Write: Supported
00:23:45.224 Scatter-Gather List
00:23:45.224 SGL Command Set: Supported
00:23:45.224 SGL Keyed: Supported
00:23:45.224 SGL Bit Bucket Descriptor: Not Supported
00:23:45.224 SGL Metadata Pointer: Not Supported
00:23:45.224 Oversized SGL: Not Supported
00:23:45.224 SGL Metadata Address: Not Supported
00:23:45.224 SGL Offset: Supported
00:23:45.224 Transport SGL Data Block: Not Supported
00:23:45.224 Replay Protected Memory Block: Not Supported
00:23:45.224
00:23:45.224 Firmware Slot Information
00:23:45.224 =========================
00:23:45.224 Active slot: 0
00:23:45.224
00:23:45.224
00:23:45.224 Error Log
00:23:45.224 =========
00:23:45.224
00:23:45.224 Active Namespaces
00:23:45.224 =================
00:23:45.224 Discovery Log Page
00:23:45.224 ==================
00:23:45.224 Generation Counter: 2
00:23:45.224 Number of Records: 2
00:23:45.224 Record Format: 0
00:23:45.224
00:23:45.224 Discovery Log Entry 0
00:23:45.224 ----------------------
00:23:45.224 Transport Type: 3 (TCP)
00:23:45.224 Address Family: 1 (IPv4)
00:23:45.224 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:45.224 Entry Flags:
00:23:45.224 Duplicate Returned Information: 1
00:23:45.224 Explicit Persistent Connection Support for Discovery: 1
00:23:45.224 Transport Requirements:
00:23:45.224 Secure Channel: Not Required
00:23:45.224 Port ID: 0 (0x0000)
00:23:45.224 Controller ID: 65535 (0xffff)
00:23:45.224 Admin Max SQ Size: 128
00:23:45.224 Transport Service Identifier: 4420
00:23:45.224 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:45.224 Transport Address: 10.0.0.2
00:23:45.224 Discovery Log Entry 1
00:23:45.224 ----------------------
00:23:45.224 Transport Type: 3 (TCP)
00:23:45.224 Address Family: 1 (IPv4)
00:23:45.224 Subsystem Type: 2 (NVM Subsystem)
00:23:45.224 Entry Flags:
00:23:45.224 Duplicate Returned Information: 0
00:23:45.224 Explicit Persistent Connection Support for Discovery: 0
00:23:45.224 Transport Requirements:
00:23:45.224 Secure Channel: Not Required
00:23:45.224 Port ID: 0 (0x0000)
00:23:45.224 Controller ID: 65535 (0xffff)
00:23:45.224 Admin Max SQ Size: 128
00:23:45.224 Transport Service Identifier: 4420
00:23:45.224 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:45.224 Transport Address: 10.0.0.2
[2024-11-19 10:51:32.809631] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:45.224 [2024-11-19
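For readers scripting against output like the discovery log page above, here is a minimal sketch of pulling the two entries into dictionaries. The field names are taken from the log text itself; `parse_discovery_entries` is a hypothetical helper written for this log format, not part of SPDK or nvme-cli.

```python
import re

def parse_discovery_entries(text):
    """Split spdk_nvme_identify discovery output into per-entry dicts.

    Assumes the 'Discovery Log Entry N' header followed by 'Key: Value'
    lines, as printed in the log above.
    """
    entries = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"Discovery Log Entry (\d+)", line)
        if m:
            current = {"entry": int(m.group(1))}
            entries.append(current)
        elif current is not None and ": " in line:
            # partition on the first ': ' so values containing ':' survive
            key, _, value = line.partition(": ")
            current[key] = value
    return entries

# Sample trimmed from the two entries shown in the log above.
sample = """Discovery Log Entry 0
Transport Type: 3 (TCP)
Transport Service Identifier: 4420
NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
Transport Address: 10.0.0.2
Discovery Log Entry 1
Subsystem Type: 2 (NVM Subsystem)
NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
"""

entries = parse_discovery_entries(sample)
```

With both records parsed, a test script can assert, for example, that entry 1 exposes the expected subsystem NQN instead of grepping the raw log.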
10:51:32.809654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c100) on tqpair=0x11ba690 00:23:45.224 [2024-11-19 10:51:32.809667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.224 [2024-11-19 10:51:32.809676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c280) on tqpair=0x11ba690 00:23:45.224 [2024-11-19 10:51:32.809684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.224 [2024-11-19 10:51:32.809696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c400) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.809705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.225 [2024-11-19 10:51:32.809712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.809720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.225 [2024-11-19 10:51:32.809738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.809764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.809771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.809782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.809807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.809985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 
10:51:32.810000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.810007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.810168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.810182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.810189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810204] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:45.225 [2024-11-19 10:51:32.810212] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:45.225 [2024-11-19 10:51:32.810228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 
[2024-11-19 10:51:32.810244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.810378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.810394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.810400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.810556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.810569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.810576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on 
tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.810723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.810735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.810742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.810904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.810917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:45.225 [2024-11-19 10:51:32.810924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.810947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.810963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.810974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.810994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.811072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.811086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.811093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.811116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.811146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.811168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.811246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.811259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.811265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.225 [2024-11-19 10:51:32.811288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.225 [2024-11-19 10:51:32.811323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.225 [2024-11-19 10:51:32.811345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.225 [2024-11-19 10:51:32.811439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.225 [2024-11-19 10:51:32.811452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.225 [2024-11-19 10:51:32.811459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.225 [2024-11-19 10:51:32.811466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.811482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.811508] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.811529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.811612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 10:51:32.811624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.811630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.811653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.811678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.811699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.811775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 10:51:32.811789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.811796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.811819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811828] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.811845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.811870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.811940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 10:51:32.811952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.811959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.811981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.811997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.812007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.812027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.812104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 10:51:32.812118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.812125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.812131] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.812148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.812157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.812163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.812174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.812194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.812284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 10:51:32.812296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.816328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.816340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.816358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.816368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.816375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ba690) 00:23:45.226 [2024-11-19 10:51:32.816386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.226 [2024-11-19 10:51:32.816407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x121c580, cid 3, qid 0 00:23:45.226 [2024-11-19 10:51:32.816523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.226 [2024-11-19 
10:51:32.816535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.226 [2024-11-19 10:51:32.816542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.226 [2024-11-19 10:51:32.816549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x121c580) on tqpair=0x11ba690 00:23:45.226 [2024-11-19 10:51:32.816563] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:45.226 00:23:45.226 10:51:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:45.487 [2024-11-19 10:51:32.850437] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:45.487 [2024-11-19 10:51:32.850483] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405162 ] 00:23:45.487 [2024-11-19 10:51:32.899054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:45.487 [2024-11-19 10:51:32.899105] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:45.487 [2024-11-19 10:51:32.899115] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:45.487 [2024-11-19 10:51:32.899129] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:45.488 [2024-11-19 10:51:32.899142] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:45.488 [2024-11-19 10:51:32.902570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting 
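The `spdk_nvme_identify -r '...'` invocation above packs the connection parameters into a single transport-ID string (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...`). A small sketch of assembling and taking apart such a string in a test harness; this mirrors the textual format visible in the log, with each token split on its first `:` only, and is not an SPDK API.

```python
def parse_trid(trid):
    """Parse a 'key:value key:value ...' transport-ID string.

    Splitting each token on the first ':' keeps NQNs that themselves
    contain ':' (e.g. nqn.2016-06.io.spdk:cnode1) intact.
    """
    out = {}
    for token in trid.split():
        key, _, value = token.partition(":")
        out[key] = value
    return out

def build_trid(**fields):
    """Inverse of parse_trid: join key:value pairs with spaces."""
    return " ".join(f"{k}:{v}" for k, v in fields.items())

# The transport ID from the command line in the log above.
trid = ("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1")
parsed = parse_trid(trid)
```

A round trip through `build_trid` and `parse_trid` lets a script vary one field (say, `trsvcid`) without string surgery on the full argument.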
state to wait for connect adminq (no timeout) 00:23:45.488 [2024-11-19 10:51:32.902628] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x786690 0 00:23:45.488 [2024-11-19 10:51:32.910324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:45.488 [2024-11-19 10:51:32.910343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:45.488 [2024-11-19 10:51:32.910351] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:45.488 [2024-11-19 10:51:32.910357] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:45.488 [2024-11-19 10:51:32.910405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.910418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.910425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.910439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:45.488 [2024-11-19 10:51:32.910467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.918316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.918334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 10:51:32.918341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.918366] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:45.488 [2024-11-19 10:51:32.918378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no 
timeout) 00:23:45.488 [2024-11-19 10:51:32.918388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:45.488 [2024-11-19 10:51:32.918406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.918433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.918458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.918551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.918563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 10:51:32.918570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.918600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:45.488 [2024-11-19 10:51:32.918614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:45.488 [2024-11-19 10:51:32.918626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 
[2024-11-19 10:51:32.918650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.918672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.918769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.918783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 10:51:32.918790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.918805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:45.488 [2024-11-19 10:51:32.918819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.918831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.918855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.918877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.918962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.918976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 
10:51:32.918983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.918990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.918998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.919014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.919040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.919061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.919139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.919153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 10:51:32.919160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.919174] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:45.488 [2024-11-19 10:51:32.919186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.919200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.919310] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:45.488 [2024-11-19 10:51:32.919320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.919333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.919357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.919379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.919475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.919490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.488 [2024-11-19 10:51:32.919496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.488 [2024-11-19 10:51:32.919511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:45.488 [2024-11-19 10:51:32.919527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.488 [2024-11-19 10:51:32.919543] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.488 [2024-11-19 10:51:32.919553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.488 [2024-11-19 10:51:32.919574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.488 [2024-11-19 10:51:32.919667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.488 [2024-11-19 10:51:32.919681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.489 [2024-11-19 10:51:32.919688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.919695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.489 [2024-11-19 10:51:32.919702] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:45.489 [2024-11-19 10:51:32.919710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.919723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:45.489 [2024-11-19 10:51:32.919742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.919756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.919764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.919774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.489 [2024-11-19 10:51:32.919796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.489 [2024-11-19 10:51:32.919924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.489 [2024-11-19 10:51:32.919943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.489 [2024-11-19 10:51:32.919951] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.919957] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=4096, cccid=0 00:23:45.489 [2024-11-19 10:51:32.919965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8100) on tqpair(0x786690): expected_datao=0, payload_size=4096 00:23:45.489 [2024-11-19 10:51:32.919972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.919989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.919998] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.489 [2024-11-19 10:51:32.960393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.489 [2024-11-19 10:51:32.960400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.489 [2024-11-19 10:51:32.960418] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:45.489 [2024-11-19 10:51:32.960426] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:45.489 [2024-11-19 10:51:32.960434] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:45.489 [2024-11-19 10:51:32.960446] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:45.489 [2024-11-19 10:51:32.960455] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:45.489 [2024-11-19 10:51:32.960463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.960484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.960498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.489 [2024-11-19 10:51:32.960547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.489 [2024-11-19 10:51:32.960638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.489 [2024-11-19 10:51:32.960652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.489 [2024-11-19 10:51:32.960660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.489 [2024-11-19 10:51:32.960677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960684] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.489 [2024-11-19 10:51:32.960712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.489 [2024-11-19 10:51:32.960748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.489 [2024-11-19 10:51:32.960781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.489 [2024-11-19 10:51:32.960811] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.960826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.960837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.960844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.960855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.489 [2024-11-19 10:51:32.960877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8100, cid 0, qid 0 00:23:45.489 [2024-11-19 10:51:32.960888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8280, cid 1, qid 0 00:23:45.489 [2024-11-19 10:51:32.960896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8400, cid 2, qid 0 00:23:45.489 [2024-11-19 10:51:32.960904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.489 [2024-11-19 10:51:32.960911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.489 [2024-11-19 10:51:32.961030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.489 [2024-11-19 10:51:32.961042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.489 [2024-11-19 10:51:32.961049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.961055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.489 [2024-11-19 10:51:32.961068] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:45.489 [2024-11-19 10:51:32.961078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.961092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.961103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:45.489 [2024-11-19 10:51:32.961113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.961121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.489 [2024-11-19 10:51:32.961127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.489 [2024-11-19 10:51:32.961138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.489 [2024-11-19 10:51:32.961159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.489 [2024-11-19 10:51:32.961253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.489 [2024-11-19 10:51:32.961266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.490 [2024-11-19 10:51:32.961273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:32.961279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.490 [2024-11-19 10:51:32.965356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:32.965382] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:32.965413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:32.965420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.490 [2024-11-19 10:51:32.965431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.490 [2024-11-19 10:51:32.965454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.490 [2024-11-19 10:51:32.965567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.490 [2024-11-19 10:51:32.965583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.490 [2024-11-19 10:51:32.965589] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:32.965595] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=4096, cccid=4 00:23:45.490 [2024-11-19 10:51:32.965603] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8700) on tqpair(0x786690): expected_datao=0, payload_size=4096 00:23:45.490 [2024-11-19 10:51:32.965610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:32.965621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:32.965628] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.006390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.490 [2024-11-19 10:51:33.006409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.490 [2024-11-19 10:51:33.006416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.006423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.490 [2024-11-19 10:51:33.006441] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:45.490 [2024-11-19 10:51:33.006460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.006479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.006494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.006501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.490 [2024-11-19 10:51:33.006513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.490 [2024-11-19 10:51:33.006536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.490 [2024-11-19 10:51:33.006644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.490 [2024-11-19 10:51:33.006657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.490 [2024-11-19 10:51:33.006663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.006670] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=4096, cccid=4 00:23:45.490 [2024-11-19 10:51:33.006677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8700) on tqpair(0x786690): expected_datao=0, payload_size=4096 00:23:45.490 [2024-11-19 10:51:33.006689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.490 
[2024-11-19 10:51:33.006707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.006716] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.490 [2024-11-19 10:51:33.047407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.490 [2024-11-19 10:51:33.047414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.490 [2024-11-19 10:51:33.047445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.047465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.047480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.490 [2024-11-19 10:51:33.047499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.490 [2024-11-19 10:51:33.047523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.490 [2024-11-19 10:51:33.047617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.490 [2024-11-19 10:51:33.047632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.490 [2024-11-19 10:51:33.047639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047645] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=4096, cccid=4 00:23:45.490 [2024-11-19 10:51:33.047653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8700) on tqpair(0x786690): expected_datao=0, payload_size=4096 00:23:45.490 [2024-11-19 10:51:33.047660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.047686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.490 [2024-11-19 10:51:33.090340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.490 [2024-11-19 10:51:33.090348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.490 [2024-11-19 10:51:33.090369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:23:45.490 [2024-11-19 10:51:33.090431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090440] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:45.490 [2024-11-19 10:51:33.090448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:45.490 [2024-11-19 10:51:33.090461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:45.490 [2024-11-19 10:51:33.090480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.490 [2024-11-19 10:51:33.090501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.490 [2024-11-19 10:51:33.090512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x786690) 00:23:45.490 [2024-11-19 10:51:33.090535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.490 [2024-11-19 10:51:33.090563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 00:23:45.490 [2024-11-19 10:51:33.090576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8880, cid 5, qid 0 00:23:45.490 [2024-11-19 10:51:33.090683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:23:45.490 [2024-11-19 10:51:33.090696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.490 [2024-11-19 10:51:33.090703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.490 [2024-11-19 10:51:33.090709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.490 [2024-11-19 10:51:33.090719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.490 [2024-11-19 10:51:33.090728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.090734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.090741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8880) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.090756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.090766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.090776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.090798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8880, cid 5, qid 0 00:23:45.491 [2024-11-19 10:51:33.090888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.090903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.090910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.090917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8880) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.090933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.090942] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.090952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.090973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8880, cid 5, qid 0 00:23:45.491 [2024-11-19 10:51:33.091066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.091080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.091087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8880) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.091110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.091135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.091157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8880, cid 5, qid 0 00:23:45.491 [2024-11-19 10:51:33.091242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.091256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.091263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8880) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.091308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:23:45.491 [2024-11-19 10:51:33.091321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.091332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.091346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.091363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.091376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.091393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.091405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x786690) 00:23:45.491 [2024-11-19 10:51:33.091423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.491 [2024-11-19 10:51:33.091446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8880, cid 5, qid 0 00:23:45.491 [2024-11-19 10:51:33.091457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8700, cid 4, qid 0 
00:23:45.491 [2024-11-19 10:51:33.091465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8a00, cid 6, qid 0 00:23:45.491 [2024-11-19 10:51:33.091472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8b80, cid 7, qid 0 00:23:45.491 [2024-11-19 10:51:33.091652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.491 [2024-11-19 10:51:33.091667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.491 [2024-11-19 10:51:33.091674] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091680] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=8192, cccid=5 00:23:45.491 [2024-11-19 10:51:33.091688] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8880) on tqpair(0x786690): expected_datao=0, payload_size=8192 00:23:45.491 [2024-11-19 10:51:33.091696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091714] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091723] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.491 [2024-11-19 10:51:33.091746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.491 [2024-11-19 10:51:33.091752] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091774] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=512, cccid=4 00:23:45.491 [2024-11-19 10:51:33.091782] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8700) on tqpair(0x786690): expected_datao=0, payload_size=512 00:23:45.491 [2024-11-19 10:51:33.091789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.491 
[2024-11-19 10:51:33.091798] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.491 [2024-11-19 10:51:33.091822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.491 [2024-11-19 10:51:33.091837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091843] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=512, cccid=6 00:23:45.491 [2024-11-19 10:51:33.091850] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8a00) on tqpair(0x786690): expected_datao=0, payload_size=512 00:23:45.491 [2024-11-19 10:51:33.091857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091866] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.491 [2024-11-19 10:51:33.091890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.491 [2024-11-19 10:51:33.091896] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091902] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x786690): datao=0, datal=4096, cccid=7 00:23:45.491 [2024-11-19 10:51:33.091909] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e8b80) on tqpair(0x786690): expected_datao=0, payload_size=4096 00:23:45.491 [2024-11-19 10:51:33.091916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.091953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.091960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.091966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8880) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.091988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.091999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.092006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.092012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8700) on tqpair=0x786690 00:23:45.491 [2024-11-19 10:51:33.092028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.491 [2024-11-19 10:51:33.092038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.491 [2024-11-19 10:51:33.092045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.491 [2024-11-19 10:51:33.092051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8a00) on tqpair=0x786690 00:23:45.492 [2024-11-19 10:51:33.092062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.492 [2024-11-19 10:51:33.092071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.492 [2024-11-19 10:51:33.092078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.492 [2024-11-19 10:51:33.092084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8b80) on tqpair=0x786690 00:23:45.492 
===================================================== 00:23:45.492 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.492 ===================================================== 00:23:45.492 Controller Capabilities/Features 00:23:45.492 ================================ 00:23:45.492 Vendor ID: 8086 00:23:45.492 Subsystem Vendor ID: 8086 00:23:45.492 Serial Number: SPDK00000000000001 00:23:45.492 Model Number: SPDK bdev Controller 00:23:45.492 Firmware Version: 25.01 00:23:45.492 Recommended Arb Burst: 6 00:23:45.492 IEEE OUI Identifier: e4 d2 5c 00:23:45.492 Multi-path I/O 00:23:45.492 May have multiple subsystem ports: Yes 00:23:45.492 May have multiple controllers: Yes 00:23:45.492 Associated with SR-IOV VF: No 00:23:45.492 Max Data Transfer Size: 131072 00:23:45.492 Max Number of Namespaces: 32 00:23:45.492 Max Number of I/O Queues: 127 00:23:45.492 NVMe Specification Version (VS): 1.3 00:23:45.492 NVMe Specification Version (Identify): 1.3 00:23:45.492 Maximum Queue Entries: 128 00:23:45.492 Contiguous Queues Required: Yes 00:23:45.492 Arbitration Mechanisms Supported 00:23:45.492 Weighted Round Robin: Not Supported 00:23:45.492 Vendor Specific: Not Supported 00:23:45.492 Reset Timeout: 15000 ms 00:23:45.492 Doorbell Stride: 4 bytes 00:23:45.492 NVM Subsystem Reset: Not Supported 00:23:45.492 Command Sets Supported 00:23:45.492 NVM Command Set: Supported 00:23:45.492 Boot Partition: Not Supported 00:23:45.492 Memory Page Size Minimum: 4096 bytes 00:23:45.492 Memory Page Size Maximum: 4096 bytes 00:23:45.492 Persistent Memory Region: Not Supported 00:23:45.492 Optional Asynchronous Events Supported 00:23:45.492 Namespace Attribute Notices: Supported 00:23:45.492 Firmware Activation Notices: Not Supported 00:23:45.492 ANA Change Notices: Not Supported 00:23:45.492 PLE Aggregate Log Change Notices: Not Supported 00:23:45.492 LBA Status Info Alert Notices: Not Supported 00:23:45.492 EGE Aggregate Log Change Notices: Not Supported 
00:23:45.492 Normal NVM Subsystem Shutdown event: Not Supported 00:23:45.492 Zone Descriptor Change Notices: Not Supported 00:23:45.492 Discovery Log Change Notices: Not Supported 00:23:45.492 Controller Attributes 00:23:45.492 128-bit Host Identifier: Supported 00:23:45.492 Non-Operational Permissive Mode: Not Supported 00:23:45.492 NVM Sets: Not Supported 00:23:45.492 Read Recovery Levels: Not Supported 00:23:45.492 Endurance Groups: Not Supported 00:23:45.492 Predictable Latency Mode: Not Supported 00:23:45.492 Traffic Based Keep ALive: Not Supported 00:23:45.492 Namespace Granularity: Not Supported 00:23:45.492 SQ Associations: Not Supported 00:23:45.492 UUID List: Not Supported 00:23:45.492 Multi-Domain Subsystem: Not Supported 00:23:45.492 Fixed Capacity Management: Not Supported 00:23:45.492 Variable Capacity Management: Not Supported 00:23:45.492 Delete Endurance Group: Not Supported 00:23:45.492 Delete NVM Set: Not Supported 00:23:45.492 Extended LBA Formats Supported: Not Supported 00:23:45.492 Flexible Data Placement Supported: Not Supported 00:23:45.492 00:23:45.492 Controller Memory Buffer Support 00:23:45.492 ================================ 00:23:45.492 Supported: No 00:23:45.492 00:23:45.492 Persistent Memory Region Support 00:23:45.492 ================================ 00:23:45.492 Supported: No 00:23:45.492 00:23:45.492 Admin Command Set Attributes 00:23:45.492 ============================ 00:23:45.492 Security Send/Receive: Not Supported 00:23:45.492 Format NVM: Not Supported 00:23:45.492 Firmware Activate/Download: Not Supported 00:23:45.492 Namespace Management: Not Supported 00:23:45.492 Device Self-Test: Not Supported 00:23:45.492 Directives: Not Supported 00:23:45.492 NVMe-MI: Not Supported 00:23:45.492 Virtualization Management: Not Supported 00:23:45.492 Doorbell Buffer Config: Not Supported 00:23:45.492 Get LBA Status Capability: Not Supported 00:23:45.492 Command & Feature Lockdown Capability: Not Supported 00:23:45.492 Abort Command 
Limit: 4 00:23:45.492 Async Event Request Limit: 4 00:23:45.492 Number of Firmware Slots: N/A 00:23:45.492 Firmware Slot 1 Read-Only: N/A 00:23:45.492 Firmware Activation Without Reset: N/A 00:23:45.492 Multiple Update Detection Support: N/A 00:23:45.492 Firmware Update Granularity: No Information Provided 00:23:45.492 Per-Namespace SMART Log: No 00:23:45.492 Asymmetric Namespace Access Log Page: Not Supported 00:23:45.492 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:45.492 Command Effects Log Page: Supported 00:23:45.492 Get Log Page Extended Data: Supported 00:23:45.492 Telemetry Log Pages: Not Supported 00:23:45.492 Persistent Event Log Pages: Not Supported 00:23:45.492 Supported Log Pages Log Page: May Support 00:23:45.492 Commands Supported & Effects Log Page: Not Supported 00:23:45.492 Feature Identifiers & Effects Log Page:May Support 00:23:45.492 NVMe-MI Commands & Effects Log Page: May Support 00:23:45.492 Data Area 4 for Telemetry Log: Not Supported 00:23:45.492 Error Log Page Entries Supported: 128 00:23:45.492 Keep Alive: Supported 00:23:45.492 Keep Alive Granularity: 10000 ms 00:23:45.492 00:23:45.492 NVM Command Set Attributes 00:23:45.492 ========================== 00:23:45.492 Submission Queue Entry Size 00:23:45.492 Max: 64 00:23:45.492 Min: 64 00:23:45.492 Completion Queue Entry Size 00:23:45.492 Max: 16 00:23:45.492 Min: 16 00:23:45.492 Number of Namespaces: 32 00:23:45.492 Compare Command: Supported 00:23:45.492 Write Uncorrectable Command: Not Supported 00:23:45.492 Dataset Management Command: Supported 00:23:45.492 Write Zeroes Command: Supported 00:23:45.492 Set Features Save Field: Not Supported 00:23:45.492 Reservations: Supported 00:23:45.492 Timestamp: Not Supported 00:23:45.492 Copy: Supported 00:23:45.492 Volatile Write Cache: Present 00:23:45.492 Atomic Write Unit (Normal): 1 00:23:45.492 Atomic Write Unit (PFail): 1 00:23:45.492 Atomic Compare & Write Unit: 1 00:23:45.492 Fused Compare & Write: Supported 00:23:45.492 Scatter-Gather 
List 00:23:45.492 SGL Command Set: Supported 00:23:45.492 SGL Keyed: Supported 00:23:45.492 SGL Bit Bucket Descriptor: Not Supported 00:23:45.492 SGL Metadata Pointer: Not Supported 00:23:45.492 Oversized SGL: Not Supported 00:23:45.492 SGL Metadata Address: Not Supported 00:23:45.492 SGL Offset: Supported 00:23:45.492 Transport SGL Data Block: Not Supported 00:23:45.492 Replay Protected Memory Block: Not Supported 00:23:45.492 00:23:45.492 Firmware Slot Information 00:23:45.492 ========================= 00:23:45.492 Active slot: 1 00:23:45.492 Slot 1 Firmware Revision: 25.01 00:23:45.492 00:23:45.492 00:23:45.492 Commands Supported and Effects 00:23:45.492 ============================== 00:23:45.492 Admin Commands 00:23:45.492 -------------- 00:23:45.492 Get Log Page (02h): Supported 00:23:45.492 Identify (06h): Supported 00:23:45.492 Abort (08h): Supported 00:23:45.492 Set Features (09h): Supported 00:23:45.492 Get Features (0Ah): Supported 00:23:45.492 Asynchronous Event Request (0Ch): Supported 00:23:45.492 Keep Alive (18h): Supported 00:23:45.492 I/O Commands 00:23:45.492 ------------ 00:23:45.492 Flush (00h): Supported LBA-Change 00:23:45.492 Write (01h): Supported LBA-Change 00:23:45.492 Read (02h): Supported 00:23:45.492 Compare (05h): Supported 00:23:45.493 Write Zeroes (08h): Supported LBA-Change 00:23:45.493 Dataset Management (09h): Supported LBA-Change 00:23:45.493 Copy (19h): Supported LBA-Change 00:23:45.493 00:23:45.493 Error Log 00:23:45.493 ========= 00:23:45.493 00:23:45.493 Arbitration 00:23:45.493 =========== 00:23:45.493 Arbitration Burst: 1 00:23:45.493 00:23:45.493 Power Management 00:23:45.493 ================ 00:23:45.493 Number of Power States: 1 00:23:45.493 Current Power State: Power State #0 00:23:45.493 Power State #0: 00:23:45.493 Max Power: 0.00 W 00:23:45.493 Non-Operational State: Operational 00:23:45.493 Entry Latency: Not Reported 00:23:45.493 Exit Latency: Not Reported 00:23:45.493 Relative Read Throughput: 0 00:23:45.493 
Relative Read Latency: 0 00:23:45.493 Relative Write Throughput: 0 00:23:45.493 Relative Write Latency: 0 00:23:45.493 Idle Power: Not Reported 00:23:45.493 Active Power: Not Reported 00:23:45.493 Non-Operational Permissive Mode: Not Supported 00:23:45.493 00:23:45.493 Health Information 00:23:45.493 ================== 00:23:45.493 Critical Warnings: 00:23:45.493 Available Spare Space: OK 00:23:45.493 Temperature: OK 00:23:45.493 Device Reliability: OK 00:23:45.493 Read Only: No 00:23:45.493 Volatile Memory Backup: OK 00:23:45.493 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:45.493 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:45.493 Available Spare: 0% 00:23:45.493 Available Spare Threshold: 0% 00:23:45.493 Life Percentage Used:[2024-11-19 10:51:33.092196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x786690) 00:23:45.493 [2024-11-19 10:51:33.092223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.493 [2024-11-19 10:51:33.092261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8b80, cid 7, qid 0 00:23:45.493 [2024-11-19 10:51:33.092389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.493 [2024-11-19 10:51:33.092405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.493 [2024-11-19 10:51:33.092412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8b80) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092461] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:45.493 [2024-11-19 10:51:33.092480] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x7e8100) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.493 [2024-11-19 10:51:33.092500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8280) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.493 [2024-11-19 10:51:33.092515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8400) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.493 [2024-11-19 10:51:33.092530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.493 [2024-11-19 10:51:33.092550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.493 [2024-11-19 10:51:33.092575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.493 [2024-11-19 10:51:33.092606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.493 [2024-11-19 10:51:33.092731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.493 [2024-11-19 10:51:33.092743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:45.493 [2024-11-19 10:51:33.092750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.493 [2024-11-19 10:51:33.092791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.493 [2024-11-19 10:51:33.092817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.493 [2024-11-19 10:51:33.092920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.493 [2024-11-19 10:51:33.092934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.493 [2024-11-19 10:51:33.092940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.092954] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:45.493 [2024-11-19 10:51:33.092966] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:45.493 [2024-11-19 10:51:33.092983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.092999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.493 [2024-11-19 10:51:33.093009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.493 [2024-11-19 10:51:33.093030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.493 [2024-11-19 10:51:33.093159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.493 [2024-11-19 10:51:33.093173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.493 [2024-11-19 10:51:33.093179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.093186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.493 [2024-11-19 10:51:33.093202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.093212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.493 [2024-11-19 10:51:33.093218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.493 [2024-11-19 10:51:33.093229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.493 [2024-11-19 10:51:33.093251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.493 [2024-11-19 10:51:33.093361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.493 [2024-11-19 10:51:33.093376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.093383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.093406] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.093431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.093453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 10:51:33.093563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.093577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.093584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.093606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.093632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.093653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 10:51:33.093738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.093752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.093759] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.093786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.093813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.093834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 10:51:33.093964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.093978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.093985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.093991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.094007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.094016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.094023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.094033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.094054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 
10:51:33.094182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.094197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.094203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.094210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.094227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.094237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.094243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.094254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.094275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 10:51:33.098319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.098336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.098358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.098366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.098384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.098394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.098400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x786690) 00:23:45.494 [2024-11-19 10:51:33.098411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.494 [2024-11-19 10:51:33.098433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e8580, cid 3, qid 0 00:23:45.494 [2024-11-19 10:51:33.098528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.494 [2024-11-19 10:51:33.098543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.494 [2024-11-19 10:51:33.098549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.494 [2024-11-19 10:51:33.098556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e8580) on tqpair=0x786690 00:23:45.494 [2024-11-19 10:51:33.098576] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:23:45.752 0% 00:23:45.752 Data Units Read: 0 00:23:45.752 Data Units Written: 0 00:23:45.752 Host Read Commands: 0 00:23:45.752 Host Write Commands: 0 00:23:45.752 Controller Busy Time: 0 minutes 00:23:45.752 Power Cycles: 0 00:23:45.752 Power On Hours: 0 hours 00:23:45.752 Unsafe Shutdowns: 0 00:23:45.752 Unrecoverable Media Errors: 0 00:23:45.752 Lifetime Error Log Entries: 0 00:23:45.752 Warning Temperature Time: 0 minutes 00:23:45.752 Critical Temperature Time: 0 minutes 00:23:45.752 00:23:45.752 Number of Queues 00:23:45.752 ================ 00:23:45.752 Number of I/O Submission Queues: 127 00:23:45.752 Number of I/O Completion Queues: 127 00:23:45.752 00:23:45.752 Active Namespaces 00:23:45.752 ================= 00:23:45.752 Namespace ID:1 00:23:45.752 Error Recovery Timeout: Unlimited 00:23:45.752 Command Set Identifier: NVM (00h) 00:23:45.752 Deallocate: Supported 00:23:45.752 Deallocated/Unwritten Error: Not Supported 00:23:45.752 Deallocated Read Value: Unknown 00:23:45.752 Deallocate in Write Zeroes: Not Supported 00:23:45.752 Deallocated Guard Field: 0xFFFF 00:23:45.752 Flush: Supported 00:23:45.752 Reservation: Supported 00:23:45.752 
Namespace Sharing Capabilities: Multiple Controllers 00:23:45.752 Size (in LBAs): 131072 (0GiB) 00:23:45.752 Capacity (in LBAs): 131072 (0GiB) 00:23:45.752 Utilization (in LBAs): 131072 (0GiB) 00:23:45.752 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:45.752 EUI64: ABCDEF0123456789 00:23:45.752 UUID: 4729f783-805e-4d51-b6b9-d1b3868946c3 00:23:45.752 Thin Provisioning: Not Supported 00:23:45.752 Per-NS Atomic Units: Yes 00:23:45.752 Atomic Boundary Size (Normal): 0 00:23:45.752 Atomic Boundary Size (PFail): 0 00:23:45.752 Atomic Boundary Offset: 0 00:23:45.752 Maximum Single Source Range Length: 65535 00:23:45.752 Maximum Copy Length: 65535 00:23:45.752 Maximum Source Range Count: 1 00:23:45.752 NGUID/EUI64 Never Reused: No 00:23:45.752 Namespace Write Protected: No 00:23:45.752 Number of LBA Formats: 1 00:23:45.753 Current LBA Format: LBA Format #00 00:23:45.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:45.753 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.753 10:51:33 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.753 rmmod nvme_tcp 00:23:45.753 rmmod nvme_fabrics 00:23:45.753 rmmod nvme_keyring 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1405121 ']' 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1405121 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1405121 ']' 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1405121 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1405121 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1405121' 00:23:45.753 killing process with pid 1405121 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1405121 00:23:45.753 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@978 -- # wait 1405121 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.012 10:51:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.975 00:23:47.975 real 0m5.764s 00:23:47.975 user 0m5.269s 00:23:47.975 sys 0m1.983s 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.975 ************************************ 00:23:47.975 END TEST nvmf_identify 00:23:47.975 ************************************ 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.975 ************************************ 00:23:47.975 START TEST nvmf_perf 00:23:47.975 ************************************ 00:23:47.975 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:48.235 * Looking for test storage... 00:23:48.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.235 --rc genhtml_branch_coverage=1 00:23:48.235 --rc genhtml_function_coverage=1 00:23:48.235 --rc genhtml_legend=1 00:23:48.235 --rc geninfo_all_blocks=1 00:23:48.235 --rc geninfo_unexecuted_blocks=1 00:23:48.235 00:23:48.235 ' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.235 --rc genhtml_branch_coverage=1 00:23:48.235 --rc genhtml_function_coverage=1 00:23:48.235 --rc genhtml_legend=1 00:23:48.235 --rc geninfo_all_blocks=1 00:23:48.235 --rc geninfo_unexecuted_blocks=1 00:23:48.235 00:23:48.235 ' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.235 --rc genhtml_branch_coverage=1 00:23:48.235 --rc genhtml_function_coverage=1 00:23:48.235 --rc genhtml_legend=1 00:23:48.235 --rc geninfo_all_blocks=1 00:23:48.235 --rc geninfo_unexecuted_blocks=1 00:23:48.235 00:23:48.235 ' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.235 --rc genhtml_branch_coverage=1 00:23:48.235 --rc genhtml_function_coverage=1 00:23:48.235 --rc genhtml_legend=1 00:23:48.235 --rc geninfo_all_blocks=1 00:23:48.235 --rc geninfo_unexecuted_blocks=1 00:23:48.235 00:23:48.235 ' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:48.235 10:51:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.235 10:51:35 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.235 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.236 10:51:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:48.236 10:51:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.768 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:50.769 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.769 
10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:50.769 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:50.769 Found net devices under 0000:09:00.0: cvl_0_0 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:50.769 Found net devices under 0000:09:00.1: cvl_0_1 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.769 10:51:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:23:50.769 00:23:50.769 --- 10.0.0.2 ping statistics --- 00:23:50.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.769 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:50.769 00:23:50.769 --- 10.0.0.1 ping statistics --- 00:23:50.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.769 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.769 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1407218 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1407218 00:23:50.770 
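The setup traced above moves the target-side port into its own network namespace (cvl_0_0_ns_spdk), so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) traffic crosses a real TCP/IP path on a single host, which the two pings then verify. A dry-run sketch of that wiring — `netns_plan` is a hypothetical helper that only echoes the ip(8)/iptables commands, since actually running them needs root and the physical E810 ports:

```shell
#!/usr/bin/env bash
# Hypothetical dry-run of the namespace wiring performed above: print the
# commands rather than execute them.
netns_plan() {
    local ns=$1 tgt_if=$2 ini_if=$3
    echo "ip netns add $ns"                                       # target-side namespace
    echo "ip link set $tgt_if netns $ns"                          # move target port in
    echo "ip addr add 10.0.0.1/24 dev $ini_if"                    # initiator address
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"  # target address
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"  # NVMe/TCP port
}

netns_plan cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

This is why the target app below is launched under `ip netns exec cvl_0_0_ns_spdk`: it must live in the namespace that owns 10.0.0.2.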
10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1407218 ']' 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.770 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.770 [2024-11-19 10:51:38.190802] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:50.770 [2024-11-19 10:51:38.190867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.770 [2024-11-19 10:51:38.260484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.770 [2024-11-19 10:51:38.317274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.770 [2024-11-19 10:51:38.317347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.770 [2024-11-19 10:51:38.317368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.770 [2024-11-19 10:51:38.317394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.770 [2024-11-19 10:51:38.317403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:50.770 [2024-11-19 10:51:38.318928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.770 [2024-11-19 10:51:38.319035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.770 [2024-11-19 10:51:38.319104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.770 [2024-11-19 10:51:38.319107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:51.028 10:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:54.306 10:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:54.306 10:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:54.306 10:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:23:54.306 10:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:54.564 10:51:42 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:54.564 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:23:54.564 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:54.564 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:54.564 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.822 [2024-11-19 10:51:42.411476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.822 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.079 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:55.079 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.337 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:55.337 10:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:55.904 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.904 [2024-11-19 10:51:43.503575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.904 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:56.162 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:23:56.162 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:56.162 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:56.162 10:51:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:57.535 Initializing NVMe Controllers 00:23:57.535 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:23:57.535 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:23:57.535 Initialization complete. Launching workers. 00:23:57.535 ======================================================== 00:23:57.535 Latency(us) 00:23:57.535 Device Information : IOPS MiB/s Average min max 00:23:57.535 PCIE (0000:0b:00.0) NSID 1 from core 0: 85291.74 333.17 374.60 28.16 7214.10 00:23:57.535 ======================================================== 00:23:57.535 Total : 85291.74 333.17 374.60 28.16 7214.10 00:23:57.535 00:23:57.535 10:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.907 Initializing NVMe Controllers 00:23:58.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:58.907 Initialization complete. Launching workers. 
00:23:58.907 ======================================================== 00:23:58.907 Latency(us) 00:23:58.907 Device Information : IOPS MiB/s Average min max 00:23:58.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.00 0.29 13831.63 137.93 45804.10 00:23:58.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18667.67 7938.24 47892.54 00:23:58.907 ======================================================== 00:23:58.907 Total : 130.00 0.51 15914.85 137.93 47892.54 00:23:58.907 00:23:59.165 10:51:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.538 Initializing NVMe Controllers 00:24:00.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:00.538 Initialization complete. Launching workers. 
00:24:00.538 ======================================================== 00:24:00.538 Latency(us) 00:24:00.538 Device Information : IOPS MiB/s Average min max 00:24:00.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8454.08 33.02 3801.20 689.77 7751.35 00:24:00.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3824.58 14.94 8411.13 6399.33 15860.22 00:24:00.538 ======================================================== 00:24:00.538 Total : 12278.66 47.96 5237.11 689.77 15860.22 00:24:00.538 00:24:00.538 10:51:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:00.538 10:51:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:00.538 10:51:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.065 Initializing NVMe Controllers 00:24:03.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.065 Controller IO queue size 128, less than required. 00:24:03.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:03.065 Controller IO queue size 128, less than required. 00:24:03.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:03.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:03.065 Initialization complete. Launching workers. 
00:24:03.065 ========================================================
00:24:03.065 Latency(us)
00:24:03.065 Device Information : IOPS MiB/s Average min max
00:24:03.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1723.22 430.80 75978.02 48369.10 134263.65
00:24:03.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.90 145.48 236332.91 79234.31 377854.05
00:24:03.065 ========================================================
00:24:03.065 Total : 2305.12 576.28 116457.98 48369.10 377854.05
00:24:03.065
00:24:03.065 10:51:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:03.323 No valid NVMe controllers or AIO or URING devices found
00:24:03.323 Initializing NVMe Controllers
00:24:03.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:03.323 Controller IO queue size 128, less than required.
00:24:03.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:03.323 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:03.323 Controller IO queue size 128, less than required.
00:24:03.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:03.323 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:03.323 WARNING: Some requested NVMe devices were skipped
00:24:03.580 10:51:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:06.108 Initializing NVMe Controllers
00:24:06.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:06.109 Controller IO queue size 128, less than required.
00:24:06.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:06.109 Controller IO queue size 128, less than required.
00:24:06.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:06.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:06.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:06.109 Initialization complete. Launching workers.
00:24:06.109
00:24:06.109 ====================
00:24:06.109 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:06.109 TCP transport:
00:24:06.109 polls: 11680
00:24:06.109 idle_polls: 8623
00:24:06.109 sock_completions: 3057
00:24:06.109 nvme_completions: 5311
00:24:06.109 submitted_requests: 7966
00:24:06.109 queued_requests: 1
00:24:06.109
00:24:06.109 ====================
00:24:06.109 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:06.109 TCP transport:
00:24:06.109 polls: 14592
00:24:06.109 idle_polls: 10817
00:24:06.109 sock_completions: 3775
00:24:06.109 nvme_completions: 6461
00:24:06.109 submitted_requests: 9700
00:24:06.109 queued_requests: 1
00:24:06.109 ========================================================
00:24:06.109 Latency(us)
00:24:06.109 Device Information : IOPS MiB/s Average min max
00:24:06.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1327.36 331.84 99202.26 71829.67 165944.77
00:24:06.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1614.83 403.71 79533.27 46412.36 120219.90
00:24:06.109 ========================================================
00:24:06.109 Total : 2942.19 735.55 88406.88 46412.36 165944.77
00:24:06.109
00:24:06.109 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:06.109 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@121 -- # sync
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:06.367 rmmod nvme_tcp
00:24:06.367 rmmod nvme_fabrics
00:24:06.367 rmmod nvme_keyring
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1407218 ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1407218
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1407218 ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1407218
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1407218
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1407218'
00:24:06.367 killing process with pid 1407218
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1407218
00:24:06.367 10:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1407218
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:08.266 10:51:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:10.171
00:24:10.171 real 0m21.929s
00:24:10.171 user 1m7.271s
00:24:10.171 sys 0m5.730s
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:10.171 ************************************
00:24:10.171 END TEST nvmf_perf
00:24:10.171 ************************************
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.171 ************************************
00:24:10.171 START TEST nvmf_fio_host
00:24:10.171 ************************************
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:10.171 * Looking for test storage...
00:24:10.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:24:10.171 10:51:57
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.171 10:51:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:10.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.171 --rc genhtml_branch_coverage=1 00:24:10.171 --rc genhtml_function_coverage=1 00:24:10.171 --rc genhtml_legend=1 00:24:10.171 --rc geninfo_all_blocks=1 00:24:10.171 --rc geninfo_unexecuted_blocks=1 00:24:10.171 00:24:10.171 ' 00:24:10.171 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:10.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.171 --rc genhtml_branch_coverage=1 00:24:10.172 --rc genhtml_function_coverage=1 00:24:10.172 --rc genhtml_legend=1 00:24:10.172 --rc geninfo_all_blocks=1 00:24:10.172 --rc geninfo_unexecuted_blocks=1 00:24:10.172 00:24:10.172 ' 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.172 --rc genhtml_branch_coverage=1 00:24:10.172 --rc genhtml_function_coverage=1 00:24:10.172 --rc genhtml_legend=1 00:24:10.172 --rc geninfo_all_blocks=1 00:24:10.172 --rc geninfo_unexecuted_blocks=1 00:24:10.172 00:24:10.172 ' 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.172 --rc genhtml_branch_coverage=1 00:24:10.172 --rc genhtml_function_coverage=1 00:24:10.172 --rc genhtml_legend=1 00:24:10.172 --rc geninfo_all_blocks=1 00:24:10.172 --rc geninfo_unexecuted_blocks=1 00:24:10.172 00:24:10.172 ' 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:10.172 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:10.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:10.173 10:51:57
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.173 10:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.706 10:51:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.706 10:51:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.706 10:51:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.706 10:51:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.706 10:51:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.706 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)'
00:24:12.707 Found 0000:09:00.0 (0x8086 - 0x159b)
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:24:12.707 Found 0000:09:00.1 (0x8086 - 0x159b)
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:24:12.707 Found net devices under 0000:09:00.0: cvl_0_0
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:24:12.707 Found net devices under 0000:09:00.1: cvl_0_1
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.707 10:52:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:24:12.707 00:24:12.707 --- 10.0.0.2 ping statistics --- 00:24:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.707 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:12.707 00:24:12.707 --- 10.0.0.1 ping statistics --- 00:24:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.707 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.707 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1411197 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1411197 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1411197 ']' 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.708 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.708 [2024-11-19 10:52:00.216296] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:24:12.708 [2024-11-19 10:52:00.216382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.708 [2024-11-19 10:52:00.293876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.966 [2024-11-19 10:52:00.355976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.966 [2024-11-19 10:52:00.356021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:12.966 [2024-11-19 10:52:00.356033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.966 [2024-11-19 10:52:00.356044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.966 [2024-11-19 10:52:00.356054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.966 [2024-11-19 10:52:00.357732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.966 [2024-11-19 10:52:00.357776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.966 [2024-11-19 10:52:00.357832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.966 [2024-11-19 10:52:00.357835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.966 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.966 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:12.966 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.224 [2024-11-19 10:52:00.751011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.224 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:13.224 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.224 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.224 10:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:13.789 Malloc1 00:24:13.789 10:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.046 10:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:14.304 10:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.562 [2024-11-19 10:52:01.981916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.562 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:14.820 10:52:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:14.820 10:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:15.078 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:15.078 fio-3.35 00:24:15.078 Starting 1 thread 00:24:17.604 00:24:17.604 test: (groupid=0, jobs=1): err= 0: pid=1411563: Tue Nov 19 10:52:04 2024 00:24:17.604 read: IOPS=8908, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec) 00:24:17.604 slat (nsec): min=1951, max=115675, avg=2517.36, stdev=1389.43 00:24:17.604 clat (usec): min=2486, max=14251, avg=7849.28, stdev=649.98 00:24:17.604 lat (usec): min=2508, max=14254, avg=7851.79, stdev=649.88 00:24:17.604 clat percentiles (usec): 00:24:17.604 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:24:17.604 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:24:17.604 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:24:17.604 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12518], 99.95th=[13042], 00:24:17.604 | 99.99th=[14222] 00:24:17.604 bw ( KiB/s): min=34816, max=36016, per=99.91%, avg=35602.00, stdev=541.11, samples=4 00:24:17.604 iops : min= 8704, max= 9004, avg=8900.50, stdev=135.28, samples=4 00:24:17.604 write: IOPS=8923, BW=34.9MiB/s (36.5MB/s)(69.9MiB/2006msec); 0 zone resets 00:24:17.604 slat (usec): min=2, max=122, avg= 2.62, stdev= 1.27 00:24:17.604 clat (usec): min=1074, max=12192, avg=6460.00, stdev=533.24 00:24:17.604 lat (usec): min=1087, max=12195, avg=6462.61, stdev=533.22 00:24:17.604 clat percentiles (usec): 00:24:17.604 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:24:17.604 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:24:17.604 | 
70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:24:17.604 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10552], 99.95th=[11076], 00:24:17.604 | 99.99th=[12125] 00:24:17.604 bw ( KiB/s): min=35400, max=35880, per=99.98%, avg=35686.00, stdev=210.49, samples=4 00:24:17.604 iops : min= 8850, max= 8970, avg=8921.50, stdev=52.62, samples=4 00:24:17.604 lat (msec) : 2=0.03%, 4=0.11%, 10=99.68%, 20=0.17% 00:24:17.604 cpu : usr=64.94%, sys=33.62%, ctx=83, majf=0, minf=32 00:24:17.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:17.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.604 issued rwts: total=17871,17900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.604 00:24:17.604 Run status group 0 (all jobs): 00:24:17.604 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:24:17.604 WRITE: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:17.604 10:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.604 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:17.604 fio-3.35 00:24:17.604 Starting 1 thread 00:24:20.132 00:24:20.132 test: (groupid=0, jobs=1): err= 0: pid=1412011: Tue Nov 19 10:52:07 2024 00:24:20.132 read: IOPS=8099, BW=127MiB/s (133MB/s)(254MiB/2008msec) 00:24:20.132 slat (nsec): min=2952, max=96666, avg=3741.11, stdev=1705.67 00:24:20.132 clat (usec): min=2199, max=17278, avg=8962.34, stdev=2081.05 00:24:20.132 lat (usec): min=2202, max=17281, avg=8966.08, stdev=2081.09 00:24:20.132 clat percentiles (usec): 00:24:20.132 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7111], 00:24:20.132 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:24:20.132 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518], 00:24:20.132 | 99.00th=[14091], 99.50th=[14484], 99.90th=[16909], 99.95th=[17171], 00:24:20.132 | 99.99th=[17171] 00:24:20.132 bw ( KiB/s): min=58656, max=77248, per=52.20%, avg=67640.00, stdev=9607.30, samples=4 00:24:20.132 iops : min= 3666, max= 4828, avg=4227.50, stdev=600.46, samples=4 00:24:20.132 write: IOPS=4766, BW=74.5MiB/s (78.1MB/s)(138MiB/1853msec); 0 zone resets 00:24:20.132 slat (usec): min=30, max=222, avg=34.16, stdev= 6.73 00:24:20.132 clat (usec): min=5982, max=19428, avg=11991.27, stdev=2243.93 00:24:20.132 lat (usec): min=6013, max=19459, avg=12025.44, stdev=2244.45 00:24:20.132 clat percentiles (usec): 00:24:20.132 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9372], 
20.00th=[10028], 00:24:20.132 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:24:20.132 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15008], 95.00th=[15926], 00:24:20.132 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:24:20.132 | 99.99th=[19530] 00:24:20.132 bw ( KiB/s): min=61056, max=81280, per=91.94%, avg=70112.00, stdev=10423.52, samples=4 00:24:20.132 iops : min= 3816, max= 5080, avg=4382.00, stdev=651.47, samples=4 00:24:20.132 lat (msec) : 4=0.22%, 10=53.13%, 20=46.65% 00:24:20.132 cpu : usr=76.05%, sys=22.71%, ctx=44, majf=0, minf=54 00:24:20.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:20.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:20.132 issued rwts: total=16263,8832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:20.132 00:24:20.132 Run status group 0 (all jobs): 00:24:20.132 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (266MB), run=2008-2008msec 00:24:20.132 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=138MiB (145MB), run=1853-1853msec 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.132 rmmod nvme_tcp 00:24:20.132 rmmod nvme_fabrics 00:24:20.132 rmmod nvme_keyring 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1411197 ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1411197 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1411197 ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1411197 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1411197 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1411197' 
00:24:20.132 killing process with pid 1411197 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1411197 00:24:20.132 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1411197 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.391 10:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.936 00:24:22.936 real 0m12.453s 00:24:22.936 user 0m36.428s 00:24:22.936 sys 0m4.055s 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.936 ************************************ 
00:24:22.936 END TEST nvmf_fio_host 00:24:22.936 ************************************ 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.936 ************************************ 00:24:22.936 START TEST nvmf_failover 00:24:22.936 ************************************ 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.936 * Looking for test storage... 00:24:22.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.936 10:52:10 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.937 --rc genhtml_branch_coverage=1 00:24:22.937 --rc genhtml_function_coverage=1 00:24:22.937 --rc genhtml_legend=1 00:24:22.937 --rc geninfo_all_blocks=1 00:24:22.937 --rc geninfo_unexecuted_blocks=1 00:24:22.937 00:24:22.937 ' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.937 --rc genhtml_branch_coverage=1 00:24:22.937 --rc genhtml_function_coverage=1 00:24:22.937 --rc genhtml_legend=1 00:24:22.937 --rc geninfo_all_blocks=1 00:24:22.937 --rc geninfo_unexecuted_blocks=1 00:24:22.937 00:24:22.937 ' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.937 --rc genhtml_branch_coverage=1 00:24:22.937 --rc genhtml_function_coverage=1 00:24:22.937 --rc genhtml_legend=1 00:24:22.937 --rc geninfo_all_blocks=1 00:24:22.937 --rc geninfo_unexecuted_blocks=1 00:24:22.937 00:24:22.937 ' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.937 --rc genhtml_branch_coverage=1 00:24:22.937 --rc genhtml_function_coverage=1 00:24:22.937 --rc genhtml_legend=1 00:24:22.937 --rc 
geninfo_all_blocks=1 00:24:22.937 --rc geninfo_unexecuted_blocks=1 00:24:22.937 00:24:22.937 ' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.937 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.938 10:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.840 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.841 10:52:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:24.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:24.841 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:24.841 Found net devices under 0000:09:00.0: cvl_0_0 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:24.841 Found net devices under 0000:09:00.1: cvl_0_1 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.841 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:24:25.100 00:24:25.100 --- 10.0.0.2 ping statistics --- 00:24:25.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.100 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:24:25.100 00:24:25.100 --- 10.0.0.1 ping statistics --- 00:24:25.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.100 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1414218 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1414218 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1414218 ']' 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.100 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:25.100 [2024-11-19 10:52:12.651484] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:24:25.100 [2024-11-19 10:52:12.651574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.358 [2024-11-19 10:52:12.723164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:25.358 [2024-11-19 10:52:12.781979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.358 [2024-11-19 10:52:12.782021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.358 [2024-11-19 10:52:12.782041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.358 [2024-11-19 10:52:12.782051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:25.358 [2024-11-19 10:52:12.782061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.358 [2024-11-19 10:52:12.783525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.358 [2024-11-19 10:52:12.783604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.358 [2024-11-19 10:52:12.783605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.358 10:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:25.616 [2024-11-19 10:52:13.188755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.616 10:52:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:25.873 Malloc0 00:24:26.131 10:52:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.387 10:52:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.645 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.902 [2024-11-19 10:52:14.317850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.902 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:27.169 [2024-11-19 10:52:14.642908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.169 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:27.490 [2024-11-19 10:52:14.968017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1414513 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1414513 /var/tmp/bdevperf.sock 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1414513 ']' 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.490 10:52:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:27.770 10:52:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.770 10:52:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:27.770 10:52:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:28.335 NVMe0n1 00:24:28.335 10:52:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:28.593 00:24:28.593 10:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1414652 00:24:28.593 10:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.593 10:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:29.526 10:52:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.092 [2024-11-19 10:52:17.433444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1340 is same with the state(6) to be set 00:24:30.093 10:52:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:33.372 10:52:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:33.372 00:24:33.630 10:52:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.630 [2024-11-19 10:52:21.166649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1bc1e40 is same with the state(6) to be set 00:24:33.630 10:52:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:36.911 10:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.911 [2024-11-19 10:52:24.455192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.911 10:52:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:38.298 10:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:38.298 10:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1414652 00:24:44.863 { 00:24:44.863 "results": [ 00:24:44.863 { 00:24:44.863 "job": "NVMe0n1", 00:24:44.863 "core_mask": "0x1", 00:24:44.863 "workload": "verify", 00:24:44.863 "status": "finished", 00:24:44.863 "verify_range": { 00:24:44.863 "start": 0, 00:24:44.863 "length": 16384 00:24:44.863 }, 00:24:44.863 "queue_depth": 128, 00:24:44.863 "io_size": 4096, 00:24:44.863 "runtime": 15.014985, 00:24:44.863 "iops": 8525.682842840002, 00:24:44.863 "mibps": 33.30344860484376, 00:24:44.863 "io_failed": 6957, 00:24:44.863 "io_timeout": 0, 00:24:44.863 "avg_latency_us":
14212.223265548724, 00:24:44.863 "min_latency_us": 546.1333333333333, 00:24:44.863 "max_latency_us": 16699.543703703705 00:24:44.863 } 00:24:44.863 ], 00:24:44.863 "core_count": 1 00:24:44.863 } 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1414513 ']' 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1414513' 00:24:44.863 killing process with pid 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1414513 00:24:44.863 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.863 [2024-11-19 10:52:15.037431] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:24:44.863 [2024-11-19 10:52:15.037541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414513 ] 00:24:44.863 [2024-11-19 10:52:15.107358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.863 [2024-11-19 10:52:15.167578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.863 Running I/O for 15 seconds... 00:24:44.863 8317.00 IOPS, 32.49 MiB/s [2024-11-19T09:52:32.486Z] [2024-11-19 10:52:17.435451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.863 [2024-11-19 10:52:17.435493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.863 [2024-11-19 10:52:17.435526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.863 [2024-11-19 10:52:17.435555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.863 [2024-11-19 10:52:17.435583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:44.863 [2024-11-19 10:52:17.435606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c2560 is same with the state(6) to be set 00:24:44.863 [2024-11-19 10:52:17.435677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.863 [2024-11-19 10:52:17.435699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.863 [2024-11-19 10:52:17.435739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.863 [2024-11-19 10:52:17.435771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.863 [2024-11-19 10:52:17.435803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.863 [2024-11-19 10:52:17.435817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.435833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.435847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.435879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.435894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.435910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.435934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.435951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.435982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.435996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:44.864 [2024-11-19 10:52:17.436238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 
[2024-11-19 10:52:17.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.864 [2024-11-19 10:52:17.436884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.864 [2024-11-19 10:52:17.436898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.436914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.436928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.436943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.436957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.436973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.436987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 
[2024-11-19 10:52:17.437382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.865 [2024-11-19 10:52:17.437718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 [2024-11-19 10:52:17.437895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.865 [2024-11-19 10:52:17.437910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.865 
[2024-11-19 10:52:17.437924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.437947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.437962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.437978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.866 [2024-11-19 10:52:17.438234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 
[2024-11-19 10:52:17.438505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.866 [2024-11-19 10:52:17.438978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.866 [2024-11-19 10:52:17.438994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 
[2024-11-19 10:52:17.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.867 [2024-11-19 10:52:17.439870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.439901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.867 [2024-11-19 10:52:17.439917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.867 [2024-11-19 10:52:17.439929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:24:44.867 [2024-11-19 10:52:17.439943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.867 [2024-11-19 10:52:17.440017] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:44.867 [2024-11-19 10:52:17.440039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:24:44.867 [2024-11-19 10:52:17.443512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:44.867 [2024-11-19 10:52:17.443550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2560 (9): Bad file descriptor 00:24:44.867 [2024-11-19 10:52:17.509291] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:44.867 8115.50 IOPS, 31.70 MiB/s [2024-11-19T09:52:32.491Z] 8286.67 IOPS, 32.37 MiB/s [2024-11-19T09:52:32.491Z] 8341.75 IOPS, 32.58 MiB/s [2024-11-19T09:52:32.491Z] [2024-11-19 10:52:21.168247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.868 [2024-11-19 10:52:21.168585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 
10:52:21.168629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.168940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:44.868 [2024-11-19 10:52:21.168970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.168985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.169014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.169045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.169074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.169103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.868 [2024-11-19 10:52:21.169135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.868 [2024-11-19 10:52:21.169151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 
[2024-11-19 10:52:21.169498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.869 [2024-11-19 10:52:21.169605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.169982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.169995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 
[2024-11-19 10:52:21.170010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.170024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.170039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.170052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.869 [2024-11-19 10:52:21.170067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.869 [2024-11-19 10:52:21.170080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 
10:52:21.170528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170713] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.170986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.170999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.171014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.171031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.870 [2024-11-19 10:52:21.171047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.870 [2024-11-19 10:52:21.171061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171263] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:24:44.871 [2024-11-19 10:52:21.171427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.871 [2024-11-19 10:52:21.171441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.871 [2024-11-19 10:52:21.171453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.871 [2024-11-19 10:52:21.171473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89672 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89680 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89704 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.171959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89712 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.171973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.171987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.171998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.172009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89720 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.172037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.172048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.172059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89728 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.172072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.172085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.871 [2024-11-19 10:52:21.172096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.871 [2024-11-19 10:52:21.172108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89736 len:8 PRP1 0x0 PRP2 0x0
00:24:44.871 [2024-11-19 10:52:21.172121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.871 [2024-11-19 10:52:21.172134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89744 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89752 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89768 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89776 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89784 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89792 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89800 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89808 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89816 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89824 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89832 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88960 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88968 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88976 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.172951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.172963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88984 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.172976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.172989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.173001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.173012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88992 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.173025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.173037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.173049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.173060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89000 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.173073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.173086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:44.872 [2024-11-19 10:52:21.173097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:44.872 [2024-11-19 10:52:21.173108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89008 len:8 PRP1 0x0 PRP2 0x0
00:24:44.872 [2024-11-19 10:52:21.173122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.872 [2024-11-19 10:52:21.173190] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:44.873 [2024-11-19 10:52:21.173231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:44.873 [2024-11-19 10:52:21.173275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:21.173293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:44.873 [2024-11-19 10:52:21.173315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:21.173331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:44.873 [2024-11-19 10:52:21.173344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:21.173358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:44.873 [2024-11-19 10:52:21.173371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:21.173385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:44.873 [2024-11-19 10:52:21.173430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2560 (9): Bad file descriptor
00:24:44.873 [2024-11-19 10:52:21.176691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:44.873 [2024-11-19 10:52:21.200316] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:44.873 8316.60 IOPS, 32.49 MiB/s [2024-11-19T09:52:32.496Z]
8392.67 IOPS, 32.78 MiB/s [2024-11-19T09:52:32.496Z]
8466.00 IOPS, 33.07 MiB/s [2024-11-19T09:52:32.496Z]
8509.25 IOPS, 33.24 MiB/s [2024-11-19T09:52:32.496Z]
8536.11 IOPS, 33.34 MiB/s [2024-11-19T09:52:32.496Z]
[2024-11-19 10:52:25.727764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.727829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.727866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.727883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.727899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.727915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.727931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.727945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.727961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.727976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.873 [2024-11-19 10:52:25.728650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.873 [2024-11-19 10:52:25.728663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.728983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.728997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.874 [2024-11-19 10:52:25.729513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.874 [2024-11-19 10:52:25.729528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.875 [2024-11-19 10:52:25.729542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875 [2024-11-19 10:52:25.729558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.875 [2024-11-19 10:52:25.729572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875 [2024-11-19 10:52:25.729587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.875 [2024-11-19 10:52:25.729601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875 [2024-11-19 10:52:25.729631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.875 [2024-11-19 10:52:25.729649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875 [2024-11-19 10:52:25.729666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.875 [2024-11-19 10:52:25.729679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875 [2024-11-19 10:52:25.729694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:44.875 [2024-11-19 10:52:25.729709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:44.875
[2024-11-19 10:52:25.729724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.729961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.729991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.875 [2024-11-19 10:52:25.730056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 
[2024-11-19 10:52:25.730252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.875 [2024-11-19 10:52:25.730550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.875 [2024-11-19 10:52:25.730566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 
[2024-11-19 10:52:25.730836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.730975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.730990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 
[2024-11-19 10:52:25.731379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.876 [2024-11-19 10:52:25.731394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.876 [2024-11-19 10:52:25.731597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.876 [2024-11-19 10:52:25.731613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.877 [2024-11-19 10:52:25.731878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.731893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6efba0 is same with the state(6) to be set 00:24:44.877 [2024-11-19 10:52:25.731913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:44.877 [2024-11-19 10:52:25.731926] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:44.877 [2024-11-19 10:52:25.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:8 PRP1 0x0 PRP2 0x0 00:24:44.877 [2024-11-19 10:52:25.731952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.732026] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:44.877 [2024-11-19 10:52:25.732081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.877 [2024-11-19 10:52:25.732101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.732123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.877 [2024-11-19 10:52:25.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.732155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.877 [2024-11-19 10:52:25.732169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 10:52:25.732184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.877 [2024-11-19 10:52:25.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.877 [2024-11-19 
10:52:25.732220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:44.877 [2024-11-19 10:52:25.732289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2560 (9): Bad file descriptor 00:24:44.877 [2024-11-19 10:52:25.735561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:44.877 [2024-11-19 10:52:25.807208] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:44.877 8472.60 IOPS, 33.10 MiB/s [2024-11-19T09:52:32.500Z] 8485.82 IOPS, 33.15 MiB/s [2024-11-19T09:52:32.500Z] 8502.92 IOPS, 33.21 MiB/s [2024-11-19T09:52:32.500Z] 8523.08 IOPS, 33.29 MiB/s [2024-11-19T09:52:32.500Z] 8521.00 IOPS, 33.29 MiB/s [2024-11-19T09:52:32.500Z] 8525.73 IOPS, 33.30 MiB/s 00:24:44.877 Latency(us) 00:24:44.877 [2024-11-19T09:52:32.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.877 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.877 Verification LBA range: start 0x0 length 0x4000 00:24:44.877 NVMe0n1 : 15.01 8525.68 33.30 463.34 0.00 14212.22 546.13 16699.54 00:24:44.877 [2024-11-19T09:52:32.500Z] =================================================================================================================== 00:24:44.877 [2024-11-19T09:52:32.500Z] Total : 8525.68 33.30 463.34 0.00 14212.22 546.13 16699.54 00:24:44.877 Received shutdown signal, test time was about 15.000000 seconds 00:24:44.877 00:24:44.877 Latency(us) 00:24:44.877 [2024-11-19T09:52:32.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.877 [2024-11-19T09:52:32.500Z] =================================================================================================================== 00:24:44.877 [2024-11-19T09:52:32.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.877 10:52:31 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1416491 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1416491 /var/tmp/bdevperf.sock 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1416491 ']' 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:44.877 10:52:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.877 [2024-11-19 10:52:32.131093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.877 10:52:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:44.877 [2024-11-19 10:52:32.415886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:44.877 10:52:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.444 NVMe0n1 00:24:45.444 10:52:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.702 00:24:45.702 10:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:46.268 00:24:46.268 10:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.268 10:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:46.526 10:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.784 10:52:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:50.066 10:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:50.066 10:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:50.066 10:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1417164 00:24:50.066 10:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.066 10:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1417164 00:24:51.001 { 00:24:51.001 "results": [ 00:24:51.001 { 00:24:51.001 "job": "NVMe0n1", 00:24:51.001 "core_mask": "0x1", 00:24:51.001 "workload": "verify", 00:24:51.001 "status": "finished", 00:24:51.001 "verify_range": { 00:24:51.001 "start": 0, 00:24:51.001 "length": 16384 00:24:51.001 }, 00:24:51.001 "queue_depth": 128, 00:24:51.001 "io_size": 4096, 00:24:51.001 "runtime": 1.006936, 00:24:51.001 "iops": 8596.375539259694, 00:24:51.001 "mibps": 33.57959195023318, 00:24:51.001 "io_failed": 0, 00:24:51.001 "io_timeout": 0, 00:24:51.001 "avg_latency_us": 
14820.07589785719, 00:24:51.001 "min_latency_us": 722.1096296296296, 00:24:51.001 "max_latency_us": 13981.013333333334 00:24:51.001 } 00:24:51.001 ], 00:24:51.001 "core_count": 1 00:24:51.001 } 00:24:51.258 10:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:51.258 [2024-11-19 10:52:31.643566] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:24:51.258 [2024-11-19 10:52:31.643671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416491 ] 00:24:51.258 [2024-11-19 10:52:31.711309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.259 [2024-11-19 10:52:31.767723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.259 [2024-11-19 10:52:34.208930] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:51.259 [2024-11-19 10:52:34.209023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.259 [2024-11-19 10:52:34.209049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.259 [2024-11-19 10:52:34.209065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.259 [2024-11-19 10:52:34.209079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.259 [2024-11-19 10:52:34.209093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:51.259 [2024-11-19 10:52:34.209106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.259 [2024-11-19 10:52:34.209120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.259 [2024-11-19 10:52:34.209134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.259 [2024-11-19 10:52:34.209148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:51.259 [2024-11-19 10:52:34.209190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:51.259 [2024-11-19 10:52:34.209220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c560 (9): Bad file descriptor 00:24:51.259 [2024-11-19 10:52:34.341426] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:51.259 Running I/O for 1 seconds... 
00:24:51.259 8528.00 IOPS, 33.31 MiB/s 00:24:51.259 Latency(us) 00:24:51.259 [2024-11-19T09:52:38.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.259 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:51.259 Verification LBA range: start 0x0 length 0x4000 00:24:51.259 NVMe0n1 : 1.01 8596.38 33.58 0.00 0.00 14820.08 722.11 13981.01 00:24:51.259 [2024-11-19T09:52:38.882Z] =================================================================================================================== 00:24:51.259 [2024-11-19T09:52:38.882Z] Total : 8596.38 33.58 0.00 0.00 14820.08 722.11 13981.01 00:24:51.259 10:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.259 10:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:51.516 10:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.774 10:52:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.774 10:52:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:52.031 10:52:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.288 10:52:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:55.574 10:52:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.574 10:52:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:55.574 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1416491 00:24:55.574 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1416491 ']' 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1416491 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1416491 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1416491' 00:24:55.575 killing process with pid 1416491 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1416491 00:24:55.575 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1416491 00:24:55.832 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:55.832 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.090 rmmod nvme_tcp 00:24:56.090 rmmod nvme_fabrics 00:24:56.090 rmmod nvme_keyring 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1414218 ']' 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1414218 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1414218 ']' 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1414218 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1414218 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1414218' 00:24:56.090 killing process with pid 1414218 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1414218 00:24:56.090 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1414218 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.349 10:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.883 00:24:58.883 real 0m35.899s 00:24:58.883 user 2m6.789s 00:24:58.883 sys 
0m5.918s 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.883 ************************************ 00:24:58.883 END TEST nvmf_failover 00:24:58.883 ************************************ 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.883 10:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.883 ************************************ 00:24:58.883 START TEST nvmf_host_discovery 00:24:58.883 ************************************ 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:58.883 * Looking for test storage... 
00:24:58.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.883 --rc genhtml_branch_coverage=1 00:24:58.883 --rc genhtml_function_coverage=1 00:24:58.883 --rc 
genhtml_legend=1 00:24:58.883 --rc geninfo_all_blocks=1 00:24:58.883 --rc geninfo_unexecuted_blocks=1 00:24:58.883 00:24:58.883 ' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.883 --rc genhtml_branch_coverage=1 00:24:58.883 --rc genhtml_function_coverage=1 00:24:58.883 --rc genhtml_legend=1 00:24:58.883 --rc geninfo_all_blocks=1 00:24:58.883 --rc geninfo_unexecuted_blocks=1 00:24:58.883 00:24:58.883 ' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.883 --rc genhtml_branch_coverage=1 00:24:58.883 --rc genhtml_function_coverage=1 00:24:58.883 --rc genhtml_legend=1 00:24:58.883 --rc geninfo_all_blocks=1 00:24:58.883 --rc geninfo_unexecuted_blocks=1 00:24:58.883 00:24:58.883 ' 00:24:58.883 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.883 --rc genhtml_branch_coverage=1 00:24:58.883 --rc genhtml_function_coverage=1 00:24:58.883 --rc genhtml_legend=1 00:24:58.883 --rc geninfo_all_blocks=1 00:24:58.883 --rc geninfo_unexecuted_blocks=1 00:24:58.884 00:24:58.884 ' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.884 10:52:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.884 10:52:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.884 10:52:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.884 10:52:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.785 
10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.785 10:52:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.785 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:00.786 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:00.786 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:00.786 Found net devices under 0000:09:00.0: cvl_0_0 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:00.786 Found net devices under 0000:09:00.1: cvl_0_1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:25:00.786 00:25:00.786 --- 10.0.0.2 ping statistics --- 00:25:00.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.786 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:00.786 00:25:00.786 --- 10.0.0.1 ping statistics --- 00:25:00.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.786 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.786 
10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1419902 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1419902 00:25:00.786 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1419902 ']' 00:25:00.787 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.044 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.044 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:01.044 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.045 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.045 [2024-11-19 10:52:48.453504] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:25:01.045 [2024-11-19 10:52:48.453575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.045 [2024-11-19 10:52:48.524650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.045 [2024-11-19 10:52:48.578410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.045 [2024-11-19 10:52:48.578465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.045 [2024-11-19 10:52:48.578480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.045 [2024-11-19 10:52:48.578491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.045 [2024-11-19 10:52:48.578501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:01.045 [2024-11-19 10:52:48.579047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 [2024-11-19 10:52:48.763774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 [2024-11-19 10:52:48.771978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:01.303 10:52:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 null0 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 null1 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1419922 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1419922 /tmp/host.sock 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1419922 ']' 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:01.303 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.303 10:52:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.303 [2024-11-19 10:52:48.846633] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:25:01.303 [2024-11-19 10:52:48.846698] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419922 ] 00:25:01.303 [2024-11-19 10:52:48.910972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.562 [2024-11-19 10:52:48.969170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:01.562 
10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:01.562 10:52:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.562 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:01.820 10:52:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:01.820 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.821 [2024-11-19 10:52:49.393702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.821 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.078 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:02.079 10:52:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:02.643 [2024-11-19 10:52:50.181903] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:02.643 [2024-11-19 10:52:50.181938] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:02.643 [2024-11-19 10:52:50.181959] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:02.900 [2024-11-19 10:52:50.268253] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:02.900 [2024-11-19 10:52:50.451494] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:02.900 [2024-11-19 10:52:50.452520] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1583f80:1 started. 
00:25:02.900 [2024-11-19 10:52:50.454232] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:02.900 [2024-11-19 10:52:50.454252] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:02.900 [2024-11-19 10:52:50.460046] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1583f80 was disconnected and freed. delete nvme_qpair. 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.158 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.159 10:52:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.159 [2024-11-19 10:52:50.754443] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15843c0:1 started. 
00:25:03.159 [2024-11-19 10:52:50.760580] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15843c0 was disconnected and freed. delete nvme_qpair. 00:25:03.159 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.417 [2024-11-19 10:52:50.841826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.417 [2024-11-19 10:52:50.842098] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:03.417 [2024-11-19 10:52:50.842127] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.417 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.418 10:52:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.418 [2024-11-19 10:52:50.929843] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:03.418 10:52:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:03.676 [2024-11-19 10:52:51.237425] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:03.676 [2024-11-19 10:52:51.237479] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:03.676 [2024-11-19 10:52:51.237494] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:03.676 [2024-11-19 10:52:51.237502] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.612 10:52:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.612 [2024-11-19 10:52:52.073990] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:04.612 [2024-11-19 10:52:52.074033] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.612 [2024-11-19 10:52:52.075781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.612 [2024-11-19 10:52:52.075815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.612 [2024-11-19 10:52:52.075846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:04.612 [2024-11-19 10:52:52.075860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.612 [2024-11-19 10:52:52.075873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.612 [2024-11-19 10:52:52.075886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.612 [2024-11-19 10:52:52.075901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.612 [2024-11-19 10:52:52.075914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.612 [2024-11-19 10:52:52.075927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:04.612 10:52:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.612 [2024-11-19 10:52:52.085769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.612 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.613 [2024-11-19 10:52:52.095811] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.613 [2024-11-19 10:52:52.095832] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.613 [2024-11-19 10:52:52.095841] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.095850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.613 [2024-11-19 10:52:52.095881] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.613 [2024-11-19 10:52:52.096147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.613 [2024-11-19 10:52:52.096177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.613 [2024-11-19 10:52:52.096194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.613 [2024-11-19 10:52:52.096218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.613 [2024-11-19 10:52:52.096253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.613 [2024-11-19 10:52:52.096287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.613 [2024-11-19 10:52:52.096314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.613 [2024-11-19 10:52:52.096345] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.613 [2024-11-19 10:52:52.096356] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.613 [2024-11-19 10:52:52.096364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.613 [2024-11-19 10:52:52.105914] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.613 [2024-11-19 10:52:52.105934] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:04.613 [2024-11-19 10:52:52.105942] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.105949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.613 [2024-11-19 10:52:52.105972] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.106199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.613 [2024-11-19 10:52:52.106229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.613 [2024-11-19 10:52:52.106245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.613 [2024-11-19 10:52:52.106267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.613 [2024-11-19 10:52:52.106299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.613 [2024-11-19 10:52:52.106329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.613 [2024-11-19 10:52:52.106343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.613 [2024-11-19 10:52:52.106355] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.613 [2024-11-19 10:52:52.106364] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.613 [2024-11-19 10:52:52.106372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.613 [2024-11-19 10:52:52.116006] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.613 [2024-11-19 10:52:52.116026] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.613 [2024-11-19 10:52:52.116035] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.116041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.613 [2024-11-19 10:52:52.116069] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.116233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.613 [2024-11-19 10:52:52.116261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.613 [2024-11-19 10:52:52.116277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.613 [2024-11-19 10:52:52.116323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.613 [2024-11-19 10:52:52.116391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.613 [2024-11-19 10:52:52.116412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.613 [2024-11-19 10:52:52.116427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.613 [2024-11-19 10:52:52.116440] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:04.613 [2024-11-19 10:52:52.116448] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.613 [2024-11-19 10:52:52.116456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.613 [2024-11-19 10:52:52.126103] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:25:04.613 [2024-11-19 10:52:52.126124] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.613 [2024-11-19 10:52:52.126133] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.613 [2024-11-19 10:52:52.126143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.613 [2024-11-19 10:52:52.126168] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.613 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.613 [2024-11-19 10:52:52.126317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.613 [2024-11-19 10:52:52.126357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.613 [2024-11-19 10:52:52.126383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.613 [2024-11-19 10:52:52.126407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.613 [2024-11-19 10:52:52.126439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.613 [2024-11-19 10:52:52.126457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.613 [2024-11-19 10:52:52.126471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:04.613 [2024-11-19 10:52:52.126484] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.613 [2024-11-19 10:52:52.126493] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.613 [2024-11-19 10:52:52.126500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.613 [2024-11-19 10:52:52.136202] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.614 [2024-11-19 10:52:52.136225] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.614 [2024-11-19 10:52:52.136234] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.136241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.614 [2024-11-19 10:52:52.136265] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.614 [2024-11-19 10:52:52.136415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.614 [2024-11-19 10:52:52.136444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.614 [2024-11-19 10:52:52.136460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.614 [2024-11-19 10:52:52.136482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.614 [2024-11-19 10:52:52.136515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.614 [2024-11-19 10:52:52.136532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.614 [2024-11-19 10:52:52.136547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.614 [2024-11-19 10:52:52.136571] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.614 [2024-11-19 10:52:52.136580] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.614 [2024-11-19 10:52:52.136587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.614 [2024-11-19 10:52:52.146310] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.614 [2024-11-19 10:52:52.146332] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:04.614 [2024-11-19 10:52:52.146341] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.146349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.614 [2024-11-19 10:52:52.146375] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.146472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.614 [2024-11-19 10:52:52.146499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.614 [2024-11-19 10:52:52.146521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.614 [2024-11-19 10:52:52.146544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.614 [2024-11-19 10:52:52.146577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.614 [2024-11-19 10:52:52.146596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.614 [2024-11-19 10:52:52.146610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.614 [2024-11-19 10:52:52.146622] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.614 [2024-11-19 10:52:52.146631] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.614 [2024-11-19 10:52:52.146639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.614 [2024-11-19 10:52:52.156410] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.614 [2024-11-19 10:52:52.156433] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.614 [2024-11-19 10:52:52.156443] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.156450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.614 [2024-11-19 10:52:52.156476] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.156618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.614 [2024-11-19 10:52:52.156646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.614 [2024-11-19 10:52:52.156678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.614 [2024-11-19 10:52:52.156700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.614 [2024-11-19 10:52:52.156746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.614 [2024-11-19 10:52:52.156765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.614 [2024-11-19 10:52:52.156778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:04.614 [2024-11-19 10:52:52.156790] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.614 [2024-11-19 10:52:52.156799] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.614 [2024-11-19 10:52:52.156806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.614 [2024-11-19 10:52:52.166511] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.614 [2024-11-19 10:52:52.166534] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.614 [2024-11-19 10:52:52.166543] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.614 [2024-11-19 10:52:52.166550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.614 [2024-11-19 10:52:52.166580] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.614 [2024-11-19 10:52:52.166767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.614 [2024-11-19 10:52:52.166797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.614 [2024-11-19 10:52:52.166814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.614 [2024-11-19 10:52:52.166836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.614 [2024-11-19 10:52:52.166871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:04.614 [2024-11-19 10:52:52.166890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.614 [2024-11-19 10:52:52.166905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.614 [2024-11-19 10:52:52.166917] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.614 [2024-11-19 10:52:52.166925] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.614 [2024-11-19 10:52:52.166933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.614 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.615 [2024-11-19 10:52:52.176615] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.615 [2024-11-19 10:52:52.176654] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.615 [2024-11-19 10:52:52.176664] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:04.615 [2024-11-19 10:52:52.176671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.615 [2024-11-19 10:52:52.176695] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.615 [2024-11-19 10:52:52.176879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.615 [2024-11-19 10:52:52.176907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.615 [2024-11-19 10:52:52.176928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.615 [2024-11-19 10:52:52.176951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.615 [2024-11-19 10:52:52.176998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.615 [2024-11-19 10:52:52.177016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.615 [2024-11-19 10:52:52.177030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.615 [2024-11-19 10:52:52.177042] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.615 [2024-11-19 10:52:52.177051] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.615 [2024-11-19 10:52:52.177058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.615 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.615 [2024-11-19 10:52:52.186728] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.615 [2024-11-19 10:52:52.186748] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.615 [2024-11-19 10:52:52.186757] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.615 [2024-11-19 10:52:52.186764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.615 [2024-11-19 10:52:52.186786] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.615 [2024-11-19 10:52:52.187009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.615 [2024-11-19 10:52:52.187037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.615 [2024-11-19 10:52:52.187053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.615 [2024-11-19 10:52:52.187075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.615 [2024-11-19 10:52:52.187110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.615 [2024-11-19 10:52:52.187128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.615 [2024-11-19 10:52:52.187143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:04.615 [2024-11-19 10:52:52.187155] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.615 [2024-11-19 10:52:52.187164] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.615 [2024-11-19 10:52:52.187171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.615 [2024-11-19 10:52:52.196821] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.615 [2024-11-19 10:52:52.196840] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.615 [2024-11-19 10:52:52.196849] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.615 [2024-11-19 10:52:52.196856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.615 [2024-11-19 10:52:52.196879] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.615 [2024-11-19 10:52:52.197010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.615 [2024-11-19 10:52:52.197042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554550 with addr=10.0.0.2, port=4420 00:25:04.615 [2024-11-19 10:52:52.197059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554550 is same with the state(6) to be set 00:25:04.615 [2024-11-19 10:52:52.197080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554550 (9): Bad file descriptor 00:25:04.615 [2024-11-19 10:52:52.197101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.615 [2024-11-19 10:52:52.197114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.615 [2024-11-19 10:52:52.197127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.615 [2024-11-19 10:52:52.197139] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.615 [2024-11-19 10:52:52.197148] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.615 [2024-11-19 10:52:52.197155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.615 [2024-11-19 10:52:52.201213] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:04.615 [2024-11-19 10:52:52.201239] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.615 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:04.615 10:52:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.988 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:05.989 10:52:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.989 10:52:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.989 10:52:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.989 10:52:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.988 [2024-11-19 10:52:54.502891] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.988 [2024-11-19 10:52:54.502921] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.988 [2024-11-19 10:52:54.502944] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.246 [2024-11-19 10:52:54.630358] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:07.246 [2024-11-19 10:52:54.735144] bdev_nvme.c:5634:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:07.246 [2024-11-19 10:52:54.735927] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x157da30:1 started. 00:25:07.246 [2024-11-19 10:52:54.738090] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.246 [2024-11-19 10:52:54.738129] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.246 request: 00:25:07.246 { 00:25:07.246 "name": "nvme", 00:25:07.246 "trtype": "tcp", 00:25:07.246 "traddr": "10.0.0.2", 00:25:07.246 "adrfam": "ipv4", 00:25:07.246 "trsvcid": "8009", 00:25:07.246 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:07.246 "wait_for_attach": true, 00:25:07.246 "method": "bdev_nvme_start_discovery", 00:25:07.246 "req_id": 1 00:25:07.246 } 00:25:07.246 Got JSON-RPC error response 00:25:07.246 response: 00:25:07.246 { 00:25:07.246 "code": -17, 00:25:07.246 "message": "File exists" 00:25:07.246 } 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.246 10:52:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.246 [2024-11-19 10:52:54.782030] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x157da30 was disconnected and freed. delete nvme_qpair. 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.246 request: 00:25:07.246 { 00:25:07.246 "name": "nvme_second", 00:25:07.246 "trtype": "tcp", 00:25:07.246 "traddr": "10.0.0.2", 00:25:07.246 "adrfam": "ipv4", 00:25:07.246 "trsvcid": "8009", 00:25:07.246 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:07.246 "wait_for_attach": true, 00:25:07.246 "method": "bdev_nvme_start_discovery", 00:25:07.246 "req_id": 1 00:25:07.246 } 00:25:07.246 Got JSON-RPC error response 00:25:07.246 response: 00:25:07.246 { 00:25:07.246 "code": -17, 00:25:07.246 "message": "File exists" 00:25:07.246 } 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:07.246 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.503 10:52:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.432 [2024-11-19 10:52:55.937463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.432 [2024-11-19 10:52:55.937505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156e560 with 
addr=10.0.0.2, port=8010 00:25:08.432 [2024-11-19 10:52:55.937531] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:08.432 [2024-11-19 10:52:55.937546] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.432 [2024-11-19 10:52:55.937558] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:09.361 [2024-11-19 10:52:56.939931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.361 [2024-11-19 10:52:56.939986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156e560 with addr=10.0.0.2, port=8010 00:25:09.361 [2024-11-19 10:52:56.940018] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:09.361 [2024-11-19 10:52:56.940033] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.361 [2024-11-19 10:52:56.940046] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:10.736 [2024-11-19 10:52:57.942146] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:10.736 request: 00:25:10.736 { 00:25:10.736 "name": "nvme_second", 00:25:10.736 "trtype": "tcp", 00:25:10.736 "traddr": "10.0.0.2", 00:25:10.736 "adrfam": "ipv4", 00:25:10.736 "trsvcid": "8010", 00:25:10.736 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:10.736 "wait_for_attach": false, 00:25:10.736 "attach_timeout_ms": 3000, 00:25:10.736 "method": "bdev_nvme_start_discovery", 00:25:10.736 "req_id": 1 00:25:10.736 } 00:25:10.736 Got JSON-RPC error response 00:25:10.736 response: 00:25:10.736 { 00:25:10.736 "code": -110, 00:25:10.736 "message": "Connection timed out" 00:25:10.736 } 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@655 -- # es=1 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1419922 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == 
tcp ']' 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.736 10:52:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.736 rmmod nvme_tcp 00:25:10.736 rmmod nvme_fabrics 00:25:10.736 rmmod nvme_keyring 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1419902 ']' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1419902 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1419902 ']' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1419902 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1419902 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1419902' 00:25:10.736 killing process with pid 1419902 00:25:10.736 10:52:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1419902 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1419902 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.736 10:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.272 00:25:13.272 real 0m14.345s 00:25:13.272 user 0m21.243s 00:25:13.272 sys 0m2.970s 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.272 
************************************ 00:25:13.272 END TEST nvmf_host_discovery 00:25:13.272 ************************************ 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.272 ************************************ 00:25:13.272 START TEST nvmf_host_multipath_status 00:25:13.272 ************************************ 00:25:13.272 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.272 * Looking for test storage... 
00:25:13.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:13.273 10:53:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.273 10:53:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.273 --rc genhtml_branch_coverage=1 00:25:13.273 --rc genhtml_function_coverage=1 00:25:13.273 --rc genhtml_legend=1 00:25:13.273 --rc geninfo_all_blocks=1 00:25:13.273 --rc geninfo_unexecuted_blocks=1 00:25:13.273 00:25:13.273 ' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.273 --rc genhtml_branch_coverage=1 00:25:13.273 --rc genhtml_function_coverage=1 00:25:13.273 --rc genhtml_legend=1 00:25:13.273 --rc geninfo_all_blocks=1 00:25:13.273 --rc geninfo_unexecuted_blocks=1 00:25:13.273 00:25:13.273 ' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.273 --rc genhtml_branch_coverage=1 00:25:13.273 --rc genhtml_function_coverage=1 00:25:13.273 --rc genhtml_legend=1 00:25:13.273 --rc geninfo_all_blocks=1 00:25:13.273 --rc geninfo_unexecuted_blocks=1 00:25:13.273 00:25:13.273 ' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.273 --rc genhtml_branch_coverage=1 00:25:13.273 --rc genhtml_function_coverage=1 00:25:13.273 --rc genhtml_legend=1 00:25:13.273 --rc geninfo_all_blocks=1 00:25:13.273 --rc geninfo_unexecuted_blocks=1 00:25:13.273 00:25:13.273 ' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:13.273 
10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.273 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.274 10:53:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.274 10:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:15.178 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:15.179 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:15.179 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:15.179 Found net devices under 0000:09:00.0: cvl_0_0 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.179 10:53:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:15.179 Found net devices under 0000:09:00.1: cvl_0_1 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.179 10:53:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.179 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:25:15.180 00:25:15.180 --- 10.0.0.2 ping statistics --- 00:25:15.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.180 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:25:15.180 00:25:15.180 --- 10.0.0.1 ping statistics --- 00:25:15.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.180 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.180 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1423107 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1423107 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1423107 ']' 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.440 10:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.440 [2024-11-19 10:53:02.868253] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:25:15.440 [2024-11-19 10:53:02.868367] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.440 [2024-11-19 10:53:02.942416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:15.440 [2024-11-19 10:53:03.000447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.440 [2024-11-19 10:53:03.000504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:15.440 [2024-11-19 10:53:03.000533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.440 [2024-11-19 10:53:03.000544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.440 [2024-11-19 10:53:03.000553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.440 [2024-11-19 10:53:03.001887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.440 [2024-11-19 10:53:03.001893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1423107 00:25:15.698 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:15.956 [2024-11-19 10:53:03.442189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.956 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:16.214 Malloc0 00:25:16.214 10:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:16.472 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.730 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.987 [2024-11-19 10:53:04.581739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.988 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:17.245 [2024-11-19 10:53:04.842438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1423391 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1423391 /var/tmp/bdevperf.sock 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:17.245 10:53:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1423391 ']' 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.245 10:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.811 10:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.811 10:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:17.811 10:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:18.068 10:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:18.634 Nvme0n1 00:25:18.634 10:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:18.892 Nvme0n1 00:25:18.892 10:53:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:18.892 10:53:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:20.792 10:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:20.792 10:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:21.359 10:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.618 10:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:22.551 10:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:22.551 10:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.551 10:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.551 10:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.809 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.809 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:22.809 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.809 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.067 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.067 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.068 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.068 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.326 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.326 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.326 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.326 10:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.584 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.584 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.584 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.584 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.842 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.842 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.842 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.842 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.100 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.100 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:24.100 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:24.358 10:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.924 10:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:25.857 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:25.857 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:25.857 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.857 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.115 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.115 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.115 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.115 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.372 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.372 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.372 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.372 10:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.631 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.889 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.889 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.889 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.889 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.148 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.148 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.148 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.148 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.406 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.406 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:27.406 10:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.665 10:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:27.922 10:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:28.856 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:28.856 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.857 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.857 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.115 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.115 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.373 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.373 10:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.631 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.631 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.631 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.631 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.889 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.889 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.889 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.889 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.147 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.147 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.147 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.147 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.405 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.405 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.405 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.405 10:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.663 10:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.663 10:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:30.663 10:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.921 10:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:31.179 10:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:32.127 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:32.127 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.127 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.127 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.386 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.386 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.386 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.386 10:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.644 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.644 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.644 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.644 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.901 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.902 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.902 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.902 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.467 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.467 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.467 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.467 10:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.467 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.467 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:33.467 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.467 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.725 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.725 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:33.725 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:34.290 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:34.290 10:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:35.661 10:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:35.661 10:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:35.661 10:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.661 10:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.661 10:53:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.661 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.661 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.661 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.919 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.919 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.919 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.919 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.177 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.177 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.177 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.177 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.435 
10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.435 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:36.435 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.435 10:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.693 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.693 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:36.693 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.693 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.951 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.951 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:36.951 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:37.209 10:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.495 10:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.901 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.188 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.188 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.188 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.188 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.447 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.447 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.447 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.447 10:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.705 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.705 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:39.705 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.705 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.963 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.963 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.963 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.963 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.221 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.221 10:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:40.479 10:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:40.479 10:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:40.803 10:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.061 10:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:41.996 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:41.996 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:41.996 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:41.996 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.254 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.254 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.254 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.254 10:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.512 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.512 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.512 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.512 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.770 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.770 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.770 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:42.770 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.336 10:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.594 10:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.594 10:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:43.594 10:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:43.852 10:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:44.417 10:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:45.352 10:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:45.352 10:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:45.352 10:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.352 10:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.610 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.610 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:45.610 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.610 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.868 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.868 10:53:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.868 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.868 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.126 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.126 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.126 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.126 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.385 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.385 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:46.385 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.385 10:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.643 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.643 
10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:46.643 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.643 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:46.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:47.158 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:47.415 10:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:48.788 10:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:48.788 10:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:48.788 10:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.788 10:53:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.788 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.788 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.788 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.788 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.047 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.047 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.047 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.047 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.305 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.305 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.305 10:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.305 10:53:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.563 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.563 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:49.563 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.563 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.820 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.820 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.820 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.820 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.077 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.077 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:50.077 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:50.335 10:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:50.593 10:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.968 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.226 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.226 10:53:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.226 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.226 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.484 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.484 10:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.484 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.484 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.742 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.742 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.742 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.742 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.000 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.000 
10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:53.000 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.000 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1423391 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1423391 ']' 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1423391 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.258 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1423391 00:25:53.529 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:53.529 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:53.529 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1423391' 00:25:53.529 killing process with pid 1423391 00:25:53.529 10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1423391 00:25:53.529 
10:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1423391 00:25:53.529 { 00:25:53.529 "results": [ 00:25:53.529 { 00:25:53.529 "job": "Nvme0n1", 00:25:53.529 "core_mask": "0x4", 00:25:53.529 "workload": "verify", 00:25:53.529 "status": "terminated", 00:25:53.529 "verify_range": { 00:25:53.529 "start": 0, 00:25:53.529 "length": 16384 00:25:53.529 }, 00:25:53.529 "queue_depth": 128, 00:25:53.529 "io_size": 4096, 00:25:53.529 "runtime": 34.366143, 00:25:53.529 "iops": 7946.1346593360795, 00:25:53.529 "mibps": 31.03958851303156, 00:25:53.529 "io_failed": 0, 00:25:53.529 "io_timeout": 0, 00:25:53.529 "avg_latency_us": 16066.292966638484, 00:25:53.529 "min_latency_us": 338.29925925925926, 00:25:53.529 "max_latency_us": 4026531.84 00:25:53.529 } 00:25:53.529 ], 00:25:53.529 "core_count": 1 00:25:53.529 } 00:25:53.529 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1423391 00:25:53.529 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:53.529 [2024-11-19 10:53:04.907757] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:25:53.529 [2024-11-19 10:53:04.907839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423391 ] 00:25:53.529 [2024-11-19 10:53:04.975108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.529 [2024-11-19 10:53:05.033115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.529 Running I/O for 90 seconds... 
00:25:53.529 8446.00 IOPS, 32.99 MiB/s [2024-11-19T09:53:41.152Z] 8542.50 IOPS, 33.37 MiB/s [2024-11-19T09:53:41.152Z] 8509.33 IOPS, 33.24 MiB/s [2024-11-19T09:53:41.152Z] 8494.75 IOPS, 33.18 MiB/s [2024-11-19T09:53:41.152Z] 8497.20 IOPS, 33.19 MiB/s [2024-11-19T09:53:41.152Z] 8529.50 IOPS, 33.32 MiB/s [2024-11-19T09:53:41.152Z] 8522.00 IOPS, 33.29 MiB/s [2024-11-19T09:53:41.152Z] 8506.75 IOPS, 33.23 MiB/s [2024-11-19T09:53:41.152Z] 8492.00 IOPS, 33.17 MiB/s [2024-11-19T09:53:41.152Z] 8489.70 IOPS, 33.16 MiB/s [2024-11-19T09:53:41.152Z] 8493.27 IOPS, 33.18 MiB/s [2024-11-19T09:53:41.152Z] 8494.25 IOPS, 33.18 MiB/s [2024-11-19T09:53:41.152Z] 8497.38 IOPS, 33.19 MiB/s [2024-11-19T09:53:41.152Z] 8495.21 IOPS, 33.18 MiB/s [2024-11-19T09:53:41.152Z] 8505.73 IOPS, 33.23 MiB/s [2024-11-19T09:53:41.152Z] [2024-11-19 10:53:21.602017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:81 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.602417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.529 [2024-11-19 10:53:21.603786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.529 [2024-11-19 10:53:21.603840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.529 [2024-11-19 10:53:21.603862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.529 [2024-11-19 10:53:21.603877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.603898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.603914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.603935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.603951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.603972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.603988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.530 [2024-11-19 10:53:21.604653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108544 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.530 [2024-11-19 10:53:21.604669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0
[repeated NOTICE pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion, timestamps 10:53:21.604692 through 10:53:21.608933: READ commands on sqid:1 (lba 108552-109208, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands on sqid:1 (lba 109344-109400, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
8014.75 IOPS, 31.31 MiB/s [2024-11-19T09:53:41.156Z] 7543.29 IOPS, 29.47 MiB/s [2024-11-19T09:53:41.156Z] 7124.22 IOPS, 27.83 MiB/s [2024-11-19T09:53:41.156Z] 6749.26 IOPS, 26.36 MiB/s [2024-11-19T09:53:41.156Z] 6798.20 IOPS, 26.56 MiB/s [2024-11-19T09:53:41.156Z] 6868.76 IOPS, 26.83 MiB/s [2024-11-19T09:53:41.156Z] 6962.05 IOPS, 27.20 MiB/s [2024-11-19T09:53:41.156Z] 7128.74 IOPS, 27.85 MiB/s [2024-11-19T09:53:41.156Z] 7284.58 IOPS, 28.46 MiB/s [2024-11-19T09:53:41.156Z] 7439.00 IOPS, 29.06 MiB/s [2024-11-19T09:53:41.156Z] 7474.15 IOPS, 29.20 MiB/s [2024-11-19T09:53:41.156Z] 7509.74 IOPS, 29.33 MiB/s [2024-11-19T09:53:41.156Z] 7538.93 IOPS, 29.45 MiB/s [2024-11-19T09:53:41.156Z] 7613.41 IOPS, 29.74 MiB/s [2024-11-19T09:53:41.156Z] 7737.17 IOPS, 30.22 MiB/s [2024-11-19T09:53:41.156Z] 7857.03 IOPS, 30.69 MiB/s [2024-11-19T09:53:41.156Z]
[repeated NOTICE pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion, timestamps 10:53:38.161822 through 10:53:38.163476: READ commands on sqid:1 (lba 34616, 34648, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands on sqid:1 (lba 34776-35072, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
[2024-11-19 10:53:38.163492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.534 [2024-11-19 10:53:38.163531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.534 [2024-11-19 10:53:38.163570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.534 [2024-11-19 10:53:38.163609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.163970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.163992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.164008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.164030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.164046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.164084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.164105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.164122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.164147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.534 [2024-11-19 10:53:38.164164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.534 [2024-11-19 10:53:38.164186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.164203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.164797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.164821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.164848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.164867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.164907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.164930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.164946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.164968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.164985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.165500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.165539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.165655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.535 [2024-11-19 10:53:38.165671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.535 [2024-11-19 10:53:38.166870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.535 [2024-11-19 10:53:38.166886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.166907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.166923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.166945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.166961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.166982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.166998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.167247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.167474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.167490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.169451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.536 [2024-11-19 10:53:38.169765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.169808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.169848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.536 [2024-11-19 10:53:38.169902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.536 [2024-11-19 10:53:38.169924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.169940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.169961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.169978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.169999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.537 [2024-11-19 10:53:38.170467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.170933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.170950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:53.537 [2024-11-19 10:53:38.173374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.537 [2024-11-19 10:53:38.173391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.173431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.173470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.173850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.173866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.538 [2024-11-19 10:53:38.174769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.538 [2024-11-19 10:53:38.174848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.538 [2024-11-19 10:53:38.174865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.174904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.174926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.174943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.174965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.174982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.175460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.175600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.175632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.176745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.539 [2024-11-19 10:53:38.176784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.539 [2024-11-19 10:53:38.176938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.539 [2024-11-19 10:53:38.176955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.176977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.176994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.177032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.177973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.177989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.540 [2024-11-19 10:53:38.178540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.540 [2024-11-19 10:53:38.178641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.540 [2024-11-19 10:53:38.178658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.178682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.178704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.180917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.180974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.180990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.181014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.181031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.181052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.541 [2024-11-19 10:53:38.181067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.181088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.181103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.541 [2024-11-19 10:53:38.181125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.541 [2024-11-19 10:53:38.181141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.184740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.184961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.184978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.185069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.542 [2024-11-19 10:53:38.185142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.542 [2024-11-19 10:53:38.185272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.542 [2024-11-19 10:53:38.185310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.185439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.185478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.185518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.185558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.185612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.185762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.185778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.187621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.187668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.187707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.187754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.187972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.187989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.543 [2024-11-19 10:53:38.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.188351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.543 [2024-11-19 10:53:38.188389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.543 [2024-11-19 10:53:38.188412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.188429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.188467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.188505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.188544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.188583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.188636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.188660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.188676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.189433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.189590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.189629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.189669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.544 [2024-11-19 10:53:38.189962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.189984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.190000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.190021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.190053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.190075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.190090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.190111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.190127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.190148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.190164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.191862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.191888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.191916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.191934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.191957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.191974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.544 [2024-11-19 10:53:38.191996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.544 [2024-11-19 10:53:38.192014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.545 [2024-11-19 10:53:38.192841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.545 [2024-11-19 10:53:38.192935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.545 [2024-11-19 10:53:38.192950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.192971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.192987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.193008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.193023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.193045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.193064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.196878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.196970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.196991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.197007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.197061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.546 [2024-11-19 10:53:38.197101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.197145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.197184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.197222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.546 [2024-11-19 10:53:38.197245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.546 [2024-11-19 10:53:38.197261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.197806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.197937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.197953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.198769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.198830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.198870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.198928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.198966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.198988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.199687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.199732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.547 [2024-11-19 10:53:38.199789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.547 [2024-11-19 10:53:38.199829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.547 [2024-11-19 10:53:38.199851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.199884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.199907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.199936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.199959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.199976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.199997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.200696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.200736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.200775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.200859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.200953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.200976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.200992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.201013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.201045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.201069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.201102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.202915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.202941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.202968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.202987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.203065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.203160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.548 [2024-11-19 10:53:38.203220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:53.548 [2024-11-19 10:53:38.203436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.548 [2024-11-19 10:53:38.203452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.203863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.203971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.203992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.204044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.204080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.204116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.204230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.204384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.204408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.204425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.207243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.207290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.549 [2024-11-19 10:53:38.207339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.207380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.207419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.549 [2024-11-19 10:53:38.207458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.549 [2024-11-19 10:53:38.207480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.207871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.207907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.207943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.207964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.207979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.550 [2024-11-19 10:53:38.208715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.550 [2024-11-19 10:53:38.208738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.550 [2024-11-19 10:53:38.208754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.208808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.208862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.208900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.208937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.208974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.208995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.209015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.209037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.209053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.209074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.209090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.210838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.210960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.210981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.551 [2024-11-19 10:53:38.211295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.551 [2024-11-19 10:53:38.211327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.551 [2024-11-19 10:53:38.211345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.211368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.211384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.211406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.211422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.211445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.211461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.211483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.211499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.211527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.211545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.212989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.552 [2024-11-19 10:53:38.213398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.552 [2024-11-19 10:53:38.213521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.552 [2024-11-19 10:53:38.213542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.553 [2024-11-19 10:53:38.216920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.553 [2024-11-19 10:53:38.216938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.553 [2024-11-19 10:53:38.216960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.553 [2024-11-19 10:53:38.216977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.553 [2024-11-19 10:53:38.217004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.553 [2024-11-19 10:53:38.217022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.553 7918.53 IOPS, 30.93 MiB/s [2024-11-19T09:53:41.176Z] 7936.36 IOPS, 31.00 MiB/s [2024-11-19T09:53:41.176Z] 7951.21 IOPS, 31.06 MiB/s [2024-11-19T09:53:41.176Z] Received shutdown signal, test time was about 34.366964 seconds 00:25:53.553 00:25:53.553 Latency(us) 00:25:53.553 [2024-11-19T09:53:41.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.553 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:53.553 Verification LBA range: start 0x0 length 0x4000 00:25:53.553 Nvme0n1 : 34.37 7946.13 31.04 0.00 0.00 16066.29 338.30 4026531.84 00:25:53.553 [2024-11-19T09:53:41.176Z] =================================================================================================================== 00:25:53.554 
[2024-11-19T09:53:41.177Z] Total : 7946.13 31.04 0.00 0.00 16066.29 338.30 4026531.84 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.812 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.812 rmmod nvme_tcp 00:25:54.070 rmmod nvme_fabrics 00:25:54.070 rmmod nvme_keyring 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1423107 ']' 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@518 -- # killprocess 1423107 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1423107 ']' 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1423107 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1423107 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1423107' 00:25:54.070 killing process with pid 1423107 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1423107 00:25:54.070 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1423107 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.330 10:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.239 00:25:56.239 real 0m43.370s 00:25:56.239 user 2m11.302s 00:25:56.239 sys 0m11.125s 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.239 ************************************ 00:25:56.239 END TEST nvmf_host_multipath_status 00:25:56.239 ************************************ 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.239 ************************************ 00:25:56.239 START TEST nvmf_discovery_remove_ifc 00:25:56.239 
************************************ 00:25:56.239 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:56.498 * Looking for test storage... 00:25:56.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.498 10:53:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.498 10:53:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.498 --rc genhtml_branch_coverage=1 00:25:56.498 --rc genhtml_function_coverage=1 00:25:56.498 --rc genhtml_legend=1 00:25:56.498 --rc geninfo_all_blocks=1 00:25:56.498 --rc geninfo_unexecuted_blocks=1 00:25:56.498 00:25:56.498 ' 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.498 --rc genhtml_branch_coverage=1 00:25:56.498 --rc genhtml_function_coverage=1 00:25:56.498 --rc genhtml_legend=1 00:25:56.498 --rc geninfo_all_blocks=1 00:25:56.498 --rc geninfo_unexecuted_blocks=1 00:25:56.498 00:25:56.498 ' 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.498 --rc genhtml_branch_coverage=1 00:25:56.498 --rc genhtml_function_coverage=1 00:25:56.498 --rc genhtml_legend=1 00:25:56.498 --rc geninfo_all_blocks=1 00:25:56.498 --rc geninfo_unexecuted_blocks=1 00:25:56.498 00:25:56.498 ' 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.498 --rc genhtml_branch_coverage=1 00:25:56.498 --rc genhtml_function_coverage=1 00:25:56.498 --rc genhtml_legend=1 00:25:56.498 --rc geninfo_all_blocks=1 00:25:56.498 --rc geninfo_unexecuted_blocks=1 00:25:56.498 00:25:56.498 ' 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.498 10:53:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.498 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:56.499 
10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.499 10:53:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:59.032 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:59.032 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:59.032 Found net devices under 0000:09:00.0: cvl_0_0 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.032 10:53:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:59.032 Found net devices under 0000:09:00.1: cvl_0_1 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.032 10:53:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.032 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.033 10:53:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:25:59.033 00:25:59.033 --- 10.0.0.2 ping statistics --- 00:25:59.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.033 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:25:59.033 00:25:59.033 --- 10.0.0.1 ping statistics --- 00:25:59.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.033 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1429856 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1429856 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1429856 ']' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.033 [2024-11-19 10:53:46.256146] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:25:59.033 [2024-11-19 10:53:46.256229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.033 [2024-11-19 10:53:46.327025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.033 [2024-11-19 10:53:46.383030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.033 [2024-11-19 10:53:46.383084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:59.033 [2024-11-19 10:53:46.383112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.033 [2024-11-19 10:53:46.383123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.033 [2024-11-19 10:53:46.383133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.033 [2024-11-19 10:53:46.383744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.033 [2024-11-19 10:53:46.521240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.033 [2024-11-19 10:53:46.529438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:59.033 null0 00:25:59.033 [2024-11-19 10:53:46.561400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1429883 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1429883 /tmp/host.sock 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1429883 ']' 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:59.033 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.033 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.033 [2024-11-19 10:53:46.625442] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:25:59.033 [2024-11-19 10:53:46.625522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429883 ] 00:25:59.292 [2024-11-19 10:53:46.692650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.292 [2024-11-19 10:53:46.751738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.292 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.551 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.551 10:53:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:59.551 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.551 10:53:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.485 [2024-11-19 10:53:47.979039] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.485 [2024-11-19 10:53:47.979063] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.485 [2024-11-19 10:53:47.979090] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.485 [2024-11-19 10:53:48.066411] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:00.743 [2024-11-19 10:53:48.127163] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:00.743 [2024-11-19 10:53:48.128197] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x739be0:1 started. 
00:26:00.743 [2024-11-19 10:53:48.129960] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:00.743 [2024-11-19 10:53:48.130019] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:00.743 [2024-11-19 10:53:48.130059] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:00.743 [2024-11-19 10:53:48.130081] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:00.743 [2024-11-19 10:53:48.130111] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.743 [2024-11-19 10:53:48.137154] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x739be0 was disconnected and freed. delete nvme_qpair. 
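The repeated `get_bdev_list` / `sleep 1` cycle visible in the trace above and below is the test's `wait_for_bdev` polling loop: it re-queries `bdev_get_bdevs` over the host RPC socket (`/tmp/host.sock`) and loops until the bdev list matches the expected value. A minimal standalone sketch of that loop, reconstructed from the trace, is below. Two assumptions to note: `rpc_cmd` is mocked here with static JSON so the sketch runs without an SPDK target, and the name extraction uses `grep`/`cut` in place of the `jq -r '.[].name'` pipeline the real helper uses.

```shell
#!/usr/bin/env bash
# Mock of rpc_cmd: in the real test this invokes
#   rpc.py -s /tmp/host.sock bdev_get_bdevs
# Here it returns canned JSON so the sketch is self-contained.
rpc_cmd() {
  echo '[{"name": "nvme0n1", "block_size": 512}]'
}

# Reconstruction of get_bdev_list: pull every bdev name out of the
# JSON array, sort, and join onto one line (jq-free variant).
get_bdev_list() {
  rpc_cmd | grep -o '"name": *"[^"]*"' | cut -d'"' -f4 | sort | xargs
}

# wait_for_bdev polls until the bdev list equals the expected value,
# sleeping 1s between attempts (bounded here so the sketch cannot hang;
# the bound is an addition, not taken from the trace).
wait_for_bdev() {
  local expected=$1
  local i
  for i in 1 2 3 4 5; do
    [[ "$(get_bdev_list)" == "$expected" ]] && return 0
    sleep 1
  done
  return 1
}

wait_for_bdev nvme0n1 && echo "bdev present"
```

In the trace, the same helper is called twice: first as `wait_for_bdev nvme0n1` after discovery attaches the controller, and later as `wait_for_bdev ''` after the interface under `cvl_0_0_ns_spdk` is downed, polling each second until the bdev disappears.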
00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.743 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.744 10:53:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.677 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.933 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.933 10:53:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.866 10:53:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.800 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.800 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.801 10:53:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.176 10:53:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.176 10:53:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.110 10:53:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:06.110 [2024-11-19 10:53:53.571576] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:06.110 [2024-11-19 10:53:53.571660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.110 [2024-11-19 10:53:53.571697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.110 [2024-11-19 10:53:53.571716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.110 [2024-11-19 10:53:53.571729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.110 [2024-11-19 10:53:53.571742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.110 [2024-11-19 10:53:53.571754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.110 [2024-11-19 10:53:53.571766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.110 [2024-11-19 10:53:53.571779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.110 [2024-11-19 10:53:53.571791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.110 [2024-11-19 10:53:53.571803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.110 [2024-11-19 10:53:53.571815] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716400 is same with the state(6) to be set 00:26:06.110 [2024-11-19 10:53:53.581603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x716400 (9): Bad file descriptor 00:26:06.110 [2024-11-19 10:53:53.591647] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:06.110 [2024-11-19 10:53:53.591668] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:06.110 [2024-11-19 10:53:53.591692] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:06.110 [2024-11-19 10:53:53.591700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:06.110 [2024-11-19 10:53:53.591754] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.043 [2024-11-19 10:53:54.646333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:07.043 [2024-11-19 10:53:54.646389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x716400 with addr=10.0.0.2, port=4420 00:26:07.043 [2024-11-19 10:53:54.646410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716400 is same with the state(6) to be set 00:26:07.043 [2024-11-19 10:53:54.646443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x716400 (9): Bad file descriptor 00:26:07.043 [2024-11-19 10:53:54.646857] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:07.043 [2024-11-19 10:53:54.646897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:07.043 [2024-11-19 10:53:54.646914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:07.043 [2024-11-19 10:53:54.646930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:07.043 [2024-11-19 10:53:54.646943] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:07.043 [2024-11-19 10:53:54.646952] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:07.043 [2024-11-19 10:53:54.646959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:07.043 [2024-11-19 10:53:54.646972] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:07.043 [2024-11-19 10:53:54.646981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.043 10:53:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.416 [2024-11-19 10:53:55.649468] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:08.416 [2024-11-19 10:53:55.649497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:08.416 [2024-11-19 10:53:55.649524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:08.416 [2024-11-19 10:53:55.649539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:08.416 [2024-11-19 10:53:55.649552] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:08.416 [2024-11-19 10:53:55.649564] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:08.416 [2024-11-19 10:53:55.649573] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:08.416 [2024-11-19 10:53:55.649581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:08.416 [2024-11-19 10:53:55.649639] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:08.416 [2024-11-19 10:53:55.649677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.416 [2024-11-19 10:53:55.649699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.416 [2024-11-19 10:53:55.649717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.416 [2024-11-19 10:53:55.649730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.416 [2024-11-19 10:53:55.649743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:08.416 [2024-11-19 10:53:55.649755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.416 [2024-11-19 10:53:55.649767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.416 [2024-11-19 10:53:55.649781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.416 [2024-11-19 10:53:55.649793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.416 [2024-11-19 10:53:55.649805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.416 [2024-11-19 10:53:55.649818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:08.416 [2024-11-19 10:53:55.649941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x705b40 (9): Bad file descriptor 00:26:08.416 [2024-11-19 10:53:55.650963] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:08.416 [2024-11-19 10:53:55.650985] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:08.416 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.416 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.416 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.416 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:08.417 10:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:09.351 10:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.330 [2024-11-19 10:53:57.705491] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:10.330 [2024-11-19 10:53:57.705519] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:10.330 [2024-11-19 10:53:57.705542] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:10.330 [2024-11-19 10:53:57.791840] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:10.330 10:53:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.587 [2024-11-19 10:53:58.014094] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:10.587 [2024-11-19 10:53:58.014867] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x720be0:1 started. 
00:26:10.587 [2024-11-19 10:53:58.016200] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:10.588 [2024-11-19 10:53:58.016242] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:10.588 [2024-11-19 10:53:58.016275] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:10.588 [2024-11-19 10:53:58.016295] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:10.588 [2024-11-19 10:53:58.016330] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:10.588 [2024-11-19 10:53:58.023245] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x720be0 was disconnected and freed. delete nvme_qpair. 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:11.521 10:53:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1429883 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1429883 ']' 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1429883 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1429883 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1429883' 00:26:11.521 killing process with pid 1429883 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1429883 00:26:11.521 10:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1429883 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.778 
10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.778 rmmod nvme_tcp 00:26:11.778 rmmod nvme_fabrics 00:26:11.778 rmmod nvme_keyring 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1429856 ']' 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1429856 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1429856 ']' 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1429856 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1429856 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1429856' 00:26:11.778 
killing process with pid 1429856 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1429856 00:26:11.778 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1429856 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.036 10:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.939 00:26:13.939 real 0m17.691s 00:26:13.939 user 0m25.658s 00:26:13.939 sys 0m2.982s 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.939 ************************************ 00:26:13.939 END TEST nvmf_discovery_remove_ifc 00:26:13.939 ************************************ 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.939 10:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.197 ************************************ 00:26:14.197 START TEST nvmf_identify_kernel_target 00:26:14.197 ************************************ 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:14.197 * Looking for test storage... 
00:26:14.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.197 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:14.198 10:54:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.198 10:54:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.198 --rc genhtml_branch_coverage=1 00:26:14.198 --rc genhtml_function_coverage=1 00:26:14.198 --rc genhtml_legend=1 00:26:14.198 --rc geninfo_all_blocks=1 00:26:14.198 --rc geninfo_unexecuted_blocks=1 00:26:14.198 00:26:14.198 ' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.198 --rc genhtml_branch_coverage=1 00:26:14.198 --rc genhtml_function_coverage=1 00:26:14.198 --rc genhtml_legend=1 00:26:14.198 --rc geninfo_all_blocks=1 00:26:14.198 --rc geninfo_unexecuted_blocks=1 00:26:14.198 00:26:14.198 ' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.198 --rc genhtml_branch_coverage=1 00:26:14.198 --rc genhtml_function_coverage=1 00:26:14.198 --rc genhtml_legend=1 00:26:14.198 --rc geninfo_all_blocks=1 00:26:14.198 --rc geninfo_unexecuted_blocks=1 00:26:14.198 00:26:14.198 ' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.198 --rc genhtml_branch_coverage=1 00:26:14.198 --rc genhtml_function_coverage=1 00:26:14.198 --rc genhtml_legend=1 00:26:14.198 --rc geninfo_all_blocks=1 00:26:14.198 --rc geninfo_unexecuted_blocks=1 00:26:14.198 00:26:14.198 ' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.198 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.199 10:54:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.730 10:54:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:16.730 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.730 10:54:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:16.730 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.730 10:54:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:16.730 Found net devices under 0000:09:00.0: cvl_0_0 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.730 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:16.731 Found net devices under 0000:09:00.1: cvl_0_1 
00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.731 10:54:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:16.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:16.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:26:16.731 00:26:16.731 --- 10.0.0.2 ping statistics --- 00:26:16.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.731 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:26:16.731 00:26:16.731 --- 10.0.0.1 ping statistics --- 00:26:16.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.731 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:16.731 
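The `nvmf_tcp_init` sequence traced above (flush addresses, create a namespace, move one port into it, assign IPs, open the firewall, ping both ways) can be sketched as a standalone script. This is a minimal sketch, assuming root privileges and two NIC ports cabled back-to-back; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.0/24` addressing are taken from this log run, not a general convention, and the plain `iptables` call stands in for the log's `ipts` wrapper (which only appends a `SPDK_NVMF` comment). It is a configuration fragment, not a runnable test.

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split built in the log: the "target"
# port lives inside a network namespace, the "initiator" port stays in
# the root namespace, and TCP/4420 is opened for NVMe-oF traffic.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                      # start from clean interfaces
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target side moves into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic arriving on the initiator-side port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
```

The namespace split is what lets a single host act as both sides of the fabric: traffic between the two ports goes over the physical cable rather than the loopback path, which is why the log pings in both directions before proceeding.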
10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:16.731 10:54:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:17.668 Waiting for block devices as requested 00:26:17.926 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:17.926 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:17.926 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:18.186 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:18.186 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:18.186 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:18.186 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:18.447 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:18.447 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:18.447 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:18.707 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:18.707 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:18.707 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:18.966 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:18.966 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:26:18.966 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:18.966 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:19.225 No valid GPT data, bailing 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:19.225 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:19.485 00:26:19.485 Discovery Log Number of Records 2, Generation counter 2 00:26:19.485 =====Discovery Log Entry 0====== 00:26:19.485 trtype: tcp 00:26:19.485 adrfam: ipv4 00:26:19.485 subtype: current discovery subsystem 
00:26:19.485 treq: not specified, sq flow control disable supported 00:26:19.485 portid: 1 00:26:19.485 trsvcid: 4420 00:26:19.485 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:19.485 traddr: 10.0.0.1 00:26:19.485 eflags: none 00:26:19.485 sectype: none 00:26:19.485 =====Discovery Log Entry 1====== 00:26:19.485 trtype: tcp 00:26:19.485 adrfam: ipv4 00:26:19.485 subtype: nvme subsystem 00:26:19.485 treq: not specified, sq flow control disable supported 00:26:19.485 portid: 1 00:26:19.485 trsvcid: 4420 00:26:19.485 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:19.485 traddr: 10.0.0.1 00:26:19.485 eflags: none 00:26:19.485 sectype: none 00:26:19.485 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:19.485 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:19.485 ===================================================== 00:26:19.485 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:19.485 ===================================================== 00:26:19.485 Controller Capabilities/Features 00:26:19.485 ================================ 00:26:19.485 Vendor ID: 0000 00:26:19.485 Subsystem Vendor ID: 0000 00:26:19.485 Serial Number: 5d5086641328510a4cd2 00:26:19.485 Model Number: Linux 00:26:19.485 Firmware Version: 6.8.9-20 00:26:19.485 Recommended Arb Burst: 0 00:26:19.485 IEEE OUI Identifier: 00 00 00 00:26:19.485 Multi-path I/O 00:26:19.485 May have multiple subsystem ports: No 00:26:19.485 May have multiple controllers: No 00:26:19.485 Associated with SR-IOV VF: No 00:26:19.485 Max Data Transfer Size: Unlimited 00:26:19.485 Max Number of Namespaces: 0 00:26:19.485 Max Number of I/O Queues: 1024 00:26:19.485 NVMe Specification Version (VS): 1.3 00:26:19.485 NVMe Specification Version (Identify): 1.3 00:26:19.485 Maximum Queue Entries: 1024 
00:26:19.485 Contiguous Queues Required: No 00:26:19.485 Arbitration Mechanisms Supported 00:26:19.485 Weighted Round Robin: Not Supported 00:26:19.485 Vendor Specific: Not Supported 00:26:19.485 Reset Timeout: 7500 ms 00:26:19.485 Doorbell Stride: 4 bytes 00:26:19.485 NVM Subsystem Reset: Not Supported 00:26:19.485 Command Sets Supported 00:26:19.485 NVM Command Set: Supported 00:26:19.485 Boot Partition: Not Supported 00:26:19.485 Memory Page Size Minimum: 4096 bytes 00:26:19.485 Memory Page Size Maximum: 4096 bytes 00:26:19.485 Persistent Memory Region: Not Supported 00:26:19.485 Optional Asynchronous Events Supported 00:26:19.485 Namespace Attribute Notices: Not Supported 00:26:19.485 Firmware Activation Notices: Not Supported 00:26:19.485 ANA Change Notices: Not Supported 00:26:19.485 PLE Aggregate Log Change Notices: Not Supported 00:26:19.485 LBA Status Info Alert Notices: Not Supported 00:26:19.485 EGE Aggregate Log Change Notices: Not Supported 00:26:19.485 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.485 Zone Descriptor Change Notices: Not Supported 00:26:19.485 Discovery Log Change Notices: Supported 00:26:19.485 Controller Attributes 00:26:19.485 128-bit Host Identifier: Not Supported 00:26:19.485 Non-Operational Permissive Mode: Not Supported 00:26:19.485 NVM Sets: Not Supported 00:26:19.485 Read Recovery Levels: Not Supported 00:26:19.485 Endurance Groups: Not Supported 00:26:19.485 Predictable Latency Mode: Not Supported 00:26:19.485 Traffic Based Keep ALive: Not Supported 00:26:19.485 Namespace Granularity: Not Supported 00:26:19.485 SQ Associations: Not Supported 00:26:19.485 UUID List: Not Supported 00:26:19.485 Multi-Domain Subsystem: Not Supported 00:26:19.485 Fixed Capacity Management: Not Supported 00:26:19.485 Variable Capacity Management: Not Supported 00:26:19.485 Delete Endurance Group: Not Supported 00:26:19.485 Delete NVM Set: Not Supported 00:26:19.485 Extended LBA Formats Supported: Not Supported 00:26:19.485 Flexible 
Data Placement Supported: Not Supported 00:26:19.485 00:26:19.485 Controller Memory Buffer Support 00:26:19.485 ================================ 00:26:19.485 Supported: No 00:26:19.485 00:26:19.485 Persistent Memory Region Support 00:26:19.485 ================================ 00:26:19.485 Supported: No 00:26:19.485 00:26:19.485 Admin Command Set Attributes 00:26:19.485 ============================ 00:26:19.485 Security Send/Receive: Not Supported 00:26:19.485 Format NVM: Not Supported 00:26:19.485 Firmware Activate/Download: Not Supported 00:26:19.485 Namespace Management: Not Supported 00:26:19.486 Device Self-Test: Not Supported 00:26:19.486 Directives: Not Supported 00:26:19.486 NVMe-MI: Not Supported 00:26:19.486 Virtualization Management: Not Supported 00:26:19.486 Doorbell Buffer Config: Not Supported 00:26:19.486 Get LBA Status Capability: Not Supported 00:26:19.486 Command & Feature Lockdown Capability: Not Supported 00:26:19.486 Abort Command Limit: 1 00:26:19.486 Async Event Request Limit: 1 00:26:19.486 Number of Firmware Slots: N/A 00:26:19.486 Firmware Slot 1 Read-Only: N/A 00:26:19.486 Firmware Activation Without Reset: N/A 00:26:19.486 Multiple Update Detection Support: N/A 00:26:19.486 Firmware Update Granularity: No Information Provided 00:26:19.486 Per-Namespace SMART Log: No 00:26:19.486 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.486 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:19.486 Command Effects Log Page: Not Supported 00:26:19.486 Get Log Page Extended Data: Supported 00:26:19.486 Telemetry Log Pages: Not Supported 00:26:19.486 Persistent Event Log Pages: Not Supported 00:26:19.486 Supported Log Pages Log Page: May Support 00:26:19.486 Commands Supported & Effects Log Page: Not Supported 00:26:19.486 Feature Identifiers & Effects Log Page:May Support 00:26:19.486 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.486 Data Area 4 for Telemetry Log: Not Supported 00:26:19.486 Error Log Page Entries 
Supported: 1 00:26:19.486 Keep Alive: Not Supported 00:26:19.486 00:26:19.486 NVM Command Set Attributes 00:26:19.486 ========================== 00:26:19.486 Submission Queue Entry Size 00:26:19.486 Max: 1 00:26:19.486 Min: 1 00:26:19.486 Completion Queue Entry Size 00:26:19.486 Max: 1 00:26:19.486 Min: 1 00:26:19.486 Number of Namespaces: 0 00:26:19.486 Compare Command: Not Supported 00:26:19.486 Write Uncorrectable Command: Not Supported 00:26:19.486 Dataset Management Command: Not Supported 00:26:19.486 Write Zeroes Command: Not Supported 00:26:19.486 Set Features Save Field: Not Supported 00:26:19.486 Reservations: Not Supported 00:26:19.486 Timestamp: Not Supported 00:26:19.486 Copy: Not Supported 00:26:19.486 Volatile Write Cache: Not Present 00:26:19.486 Atomic Write Unit (Normal): 1 00:26:19.486 Atomic Write Unit (PFail): 1 00:26:19.486 Atomic Compare & Write Unit: 1 00:26:19.486 Fused Compare & Write: Not Supported 00:26:19.486 Scatter-Gather List 00:26:19.486 SGL Command Set: Supported 00:26:19.486 SGL Keyed: Not Supported 00:26:19.486 SGL Bit Bucket Descriptor: Not Supported 00:26:19.486 SGL Metadata Pointer: Not Supported 00:26:19.486 Oversized SGL: Not Supported 00:26:19.486 SGL Metadata Address: Not Supported 00:26:19.486 SGL Offset: Supported 00:26:19.486 Transport SGL Data Block: Not Supported 00:26:19.486 Replay Protected Memory Block: Not Supported 00:26:19.486 00:26:19.486 Firmware Slot Information 00:26:19.486 ========================= 00:26:19.486 Active slot: 0 00:26:19.486 00:26:19.486 00:26:19.486 Error Log 00:26:19.486 ========= 00:26:19.486 00:26:19.486 Active Namespaces 00:26:19.486 ================= 00:26:19.486 Discovery Log Page 00:26:19.486 ================== 00:26:19.486 Generation Counter: 2 00:26:19.486 Number of Records: 2 00:26:19.486 Record Format: 0 00:26:19.486 00:26:19.486 Discovery Log Entry 0 00:26:19.486 ---------------------- 00:26:19.486 Transport Type: 3 (TCP) 00:26:19.486 Address Family: 1 (IPv4) 00:26:19.486 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:19.486 Entry Flags: 00:26:19.486 Duplicate Returned Information: 0 00:26:19.486 Explicit Persistent Connection Support for Discovery: 0 00:26:19.486 Transport Requirements: 00:26:19.486 Secure Channel: Not Specified 00:26:19.486 Port ID: 1 (0x0001) 00:26:19.486 Controller ID: 65535 (0xffff) 00:26:19.486 Admin Max SQ Size: 32 00:26:19.486 Transport Service Identifier: 4420 00:26:19.486 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:19.486 Transport Address: 10.0.0.1 00:26:19.486 Discovery Log Entry 1 00:26:19.486 ---------------------- 00:26:19.486 Transport Type: 3 (TCP) 00:26:19.486 Address Family: 1 (IPv4) 00:26:19.486 Subsystem Type: 2 (NVM Subsystem) 00:26:19.486 Entry Flags: 00:26:19.486 Duplicate Returned Information: 0 00:26:19.486 Explicit Persistent Connection Support for Discovery: 0 00:26:19.486 Transport Requirements: 00:26:19.486 Secure Channel: Not Specified 00:26:19.486 Port ID: 1 (0x0001) 00:26:19.486 Controller ID: 65535 (0xffff) 00:26:19.486 Admin Max SQ Size: 32 00:26:19.486 Transport Service Identifier: 4420 00:26:19.486 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:19.486 Transport Address: 10.0.0.1 00:26:19.486 10:54:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:19.486 get_feature(0x01) failed 00:26:19.486 get_feature(0x02) failed 00:26:19.486 get_feature(0x04) failed 00:26:19.486 ===================================================== 00:26:19.486 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:19.486 ===================================================== 00:26:19.486 Controller Capabilities/Features 00:26:19.486 ================================ 00:26:19.486 Vendor ID: 0000 00:26:19.486 Subsystem Vendor ID: 
0000 00:26:19.486 Serial Number: 1d40af9aee8f50018a5c 00:26:19.486 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:19.486 Firmware Version: 6.8.9-20 00:26:19.486 Recommended Arb Burst: 6 00:26:19.486 IEEE OUI Identifier: 00 00 00 00:26:19.486 Multi-path I/O 00:26:19.486 May have multiple subsystem ports: Yes 00:26:19.486 May have multiple controllers: Yes 00:26:19.486 Associated with SR-IOV VF: No 00:26:19.486 Max Data Transfer Size: Unlimited 00:26:19.486 Max Number of Namespaces: 1024 00:26:19.486 Max Number of I/O Queues: 128 00:26:19.486 NVMe Specification Version (VS): 1.3 00:26:19.486 NVMe Specification Version (Identify): 1.3 00:26:19.486 Maximum Queue Entries: 1024 00:26:19.486 Contiguous Queues Required: No 00:26:19.486 Arbitration Mechanisms Supported 00:26:19.486 Weighted Round Robin: Not Supported 00:26:19.486 Vendor Specific: Not Supported 00:26:19.486 Reset Timeout: 7500 ms 00:26:19.486 Doorbell Stride: 4 bytes 00:26:19.486 NVM Subsystem Reset: Not Supported 00:26:19.486 Command Sets Supported 00:26:19.486 NVM Command Set: Supported 00:26:19.486 Boot Partition: Not Supported 00:26:19.486 Memory Page Size Minimum: 4096 bytes 00:26:19.486 Memory Page Size Maximum: 4096 bytes 00:26:19.486 Persistent Memory Region: Not Supported 00:26:19.486 Optional Asynchronous Events Supported 00:26:19.486 Namespace Attribute Notices: Supported 00:26:19.486 Firmware Activation Notices: Not Supported 00:26:19.486 ANA Change Notices: Supported 00:26:19.486 PLE Aggregate Log Change Notices: Not Supported 00:26:19.486 LBA Status Info Alert Notices: Not Supported 00:26:19.486 EGE Aggregate Log Change Notices: Not Supported 00:26:19.486 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.486 Zone Descriptor Change Notices: Not Supported 00:26:19.486 Discovery Log Change Notices: Not Supported 00:26:19.486 Controller Attributes 00:26:19.486 128-bit Host Identifier: Supported 00:26:19.486 Non-Operational Permissive Mode: Not Supported 00:26:19.486 NVM Sets: Not 
Supported 00:26:19.486 Read Recovery Levels: Not Supported 00:26:19.486 Endurance Groups: Not Supported 00:26:19.486 Predictable Latency Mode: Not Supported 00:26:19.486 Traffic Based Keep ALive: Supported 00:26:19.487 Namespace Granularity: Not Supported 00:26:19.487 SQ Associations: Not Supported 00:26:19.487 UUID List: Not Supported 00:26:19.487 Multi-Domain Subsystem: Not Supported 00:26:19.487 Fixed Capacity Management: Not Supported 00:26:19.487 Variable Capacity Management: Not Supported 00:26:19.487 Delete Endurance Group: Not Supported 00:26:19.487 Delete NVM Set: Not Supported 00:26:19.487 Extended LBA Formats Supported: Not Supported 00:26:19.487 Flexible Data Placement Supported: Not Supported 00:26:19.487 00:26:19.487 Controller Memory Buffer Support 00:26:19.487 ================================ 00:26:19.487 Supported: No 00:26:19.487 00:26:19.487 Persistent Memory Region Support 00:26:19.487 ================================ 00:26:19.487 Supported: No 00:26:19.487 00:26:19.487 Admin Command Set Attributes 00:26:19.487 ============================ 00:26:19.487 Security Send/Receive: Not Supported 00:26:19.487 Format NVM: Not Supported 00:26:19.487 Firmware Activate/Download: Not Supported 00:26:19.487 Namespace Management: Not Supported 00:26:19.487 Device Self-Test: Not Supported 00:26:19.487 Directives: Not Supported 00:26:19.487 NVMe-MI: Not Supported 00:26:19.487 Virtualization Management: Not Supported 00:26:19.487 Doorbell Buffer Config: Not Supported 00:26:19.487 Get LBA Status Capability: Not Supported 00:26:19.487 Command & Feature Lockdown Capability: Not Supported 00:26:19.487 Abort Command Limit: 4 00:26:19.487 Async Event Request Limit: 4 00:26:19.487 Number of Firmware Slots: N/A 00:26:19.487 Firmware Slot 1 Read-Only: N/A 00:26:19.487 Firmware Activation Without Reset: N/A 00:26:19.487 Multiple Update Detection Support: N/A 00:26:19.487 Firmware Update Granularity: No Information Provided 00:26:19.487 Per-Namespace SMART Log: Yes 
00:26:19.487 Asymmetric Namespace Access Log Page: Supported 00:26:19.487 ANA Transition Time : 10 sec 00:26:19.487 00:26:19.487 Asymmetric Namespace Access Capabilities 00:26:19.487 ANA Optimized State : Supported 00:26:19.487 ANA Non-Optimized State : Supported 00:26:19.487 ANA Inaccessible State : Supported 00:26:19.487 ANA Persistent Loss State : Supported 00:26:19.487 ANA Change State : Supported 00:26:19.487 ANAGRPID is not changed : No 00:26:19.487 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:19.487 00:26:19.487 ANA Group Identifier Maximum : 128 00:26:19.487 Number of ANA Group Identifiers : 128 00:26:19.487 Max Number of Allowed Namespaces : 1024 00:26:19.487 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:19.487 Command Effects Log Page: Supported 00:26:19.487 Get Log Page Extended Data: Supported 00:26:19.487 Telemetry Log Pages: Not Supported 00:26:19.487 Persistent Event Log Pages: Not Supported 00:26:19.487 Supported Log Pages Log Page: May Support 00:26:19.487 Commands Supported & Effects Log Page: Not Supported 00:26:19.487 Feature Identifiers & Effects Log Page:May Support 00:26:19.487 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.487 Data Area 4 for Telemetry Log: Not Supported 00:26:19.487 Error Log Page Entries Supported: 128 00:26:19.487 Keep Alive: Supported 00:26:19.487 Keep Alive Granularity: 1000 ms 00:26:19.487 00:26:19.487 NVM Command Set Attributes 00:26:19.487 ========================== 00:26:19.487 Submission Queue Entry Size 00:26:19.487 Max: 64 00:26:19.487 Min: 64 00:26:19.487 Completion Queue Entry Size 00:26:19.487 Max: 16 00:26:19.487 Min: 16 00:26:19.487 Number of Namespaces: 1024 00:26:19.487 Compare Command: Not Supported 00:26:19.487 Write Uncorrectable Command: Not Supported 00:26:19.487 Dataset Management Command: Supported 00:26:19.487 Write Zeroes Command: Supported 00:26:19.487 Set Features Save Field: Not Supported 00:26:19.487 Reservations: Not Supported 00:26:19.487 Timestamp: Not Supported 
00:26:19.487 Copy: Not Supported 00:26:19.487 Volatile Write Cache: Present 00:26:19.487 Atomic Write Unit (Normal): 1 00:26:19.487 Atomic Write Unit (PFail): 1 00:26:19.487 Atomic Compare & Write Unit: 1 00:26:19.487 Fused Compare & Write: Not Supported 00:26:19.487 Scatter-Gather List 00:26:19.487 SGL Command Set: Supported 00:26:19.487 SGL Keyed: Not Supported 00:26:19.487 SGL Bit Bucket Descriptor: Not Supported 00:26:19.487 SGL Metadata Pointer: Not Supported 00:26:19.487 Oversized SGL: Not Supported 00:26:19.487 SGL Metadata Address: Not Supported 00:26:19.487 SGL Offset: Supported 00:26:19.487 Transport SGL Data Block: Not Supported 00:26:19.487 Replay Protected Memory Block: Not Supported 00:26:19.487 00:26:19.487 Firmware Slot Information 00:26:19.487 ========================= 00:26:19.487 Active slot: 0 00:26:19.487 00:26:19.487 Asymmetric Namespace Access 00:26:19.487 =========================== 00:26:19.487 Change Count : 0 00:26:19.487 Number of ANA Group Descriptors : 1 00:26:19.487 ANA Group Descriptor : 0 00:26:19.487 ANA Group ID : 1 00:26:19.487 Number of NSID Values : 1 00:26:19.487 Change Count : 0 00:26:19.487 ANA State : 1 00:26:19.487 Namespace Identifier : 1 00:26:19.487 00:26:19.487 Commands Supported and Effects 00:26:19.487 ============================== 00:26:19.487 Admin Commands 00:26:19.487 -------------- 00:26:19.487 Get Log Page (02h): Supported 00:26:19.487 Identify (06h): Supported 00:26:19.487 Abort (08h): Supported 00:26:19.487 Set Features (09h): Supported 00:26:19.487 Get Features (0Ah): Supported 00:26:19.487 Asynchronous Event Request (0Ch): Supported 00:26:19.487 Keep Alive (18h): Supported 00:26:19.487 I/O Commands 00:26:19.487 ------------ 00:26:19.487 Flush (00h): Supported 00:26:19.487 Write (01h): Supported LBA-Change 00:26:19.487 Read (02h): Supported 00:26:19.487 Write Zeroes (08h): Supported LBA-Change 00:26:19.487 Dataset Management (09h): Supported 00:26:19.487 00:26:19.487 Error Log 00:26:19.487 ========= 
00:26:19.487 Entry: 0 00:26:19.487 Error Count: 0x3 00:26:19.487 Submission Queue Id: 0x0 00:26:19.487 Command Id: 0x5 00:26:19.487 Phase Bit: 0 00:26:19.487 Status Code: 0x2 00:26:19.487 Status Code Type: 0x0 00:26:19.487 Do Not Retry: 1 00:26:19.487 Error Location: 0x28 00:26:19.487 LBA: 0x0 00:26:19.487 Namespace: 0x0 00:26:19.487 Vendor Log Page: 0x0 00:26:19.487 ----------- 00:26:19.487 Entry: 1 00:26:19.487 Error Count: 0x2 00:26:19.487 Submission Queue Id: 0x0 00:26:19.487 Command Id: 0x5 00:26:19.487 Phase Bit: 0 00:26:19.487 Status Code: 0x2 00:26:19.487 Status Code Type: 0x0 00:26:19.487 Do Not Retry: 1 00:26:19.487 Error Location: 0x28 00:26:19.487 LBA: 0x0 00:26:19.487 Namespace: 0x0 00:26:19.487 Vendor Log Page: 0x0 00:26:19.487 ----------- 00:26:19.487 Entry: 2 00:26:19.487 Error Count: 0x1 00:26:19.487 Submission Queue Id: 0x0 00:26:19.487 Command Id: 0x4 00:26:19.487 Phase Bit: 0 00:26:19.487 Status Code: 0x2 00:26:19.487 Status Code Type: 0x0 00:26:19.487 Do Not Retry: 1 00:26:19.487 Error Location: 0x28 00:26:19.487 LBA: 0x0 00:26:19.487 Namespace: 0x0 00:26:19.487 Vendor Log Page: 0x0 00:26:19.487 00:26:19.487 Number of Queues 00:26:19.487 ================ 00:26:19.487 Number of I/O Submission Queues: 128 00:26:19.487 Number of I/O Completion Queues: 128 00:26:19.487 00:26:19.487 ZNS Specific Controller Data 00:26:19.488 ============================ 00:26:19.488 Zone Append Size Limit: 0 00:26:19.488 00:26:19.488 00:26:19.488 Active Namespaces 00:26:19.488 ================= 00:26:19.488 get_feature(0x05) failed 00:26:19.488 Namespace ID:1 00:26:19.488 Command Set Identifier: NVM (00h) 00:26:19.488 Deallocate: Supported 00:26:19.488 Deallocated/Unwritten Error: Not Supported 00:26:19.488 Deallocated Read Value: Unknown 00:26:19.488 Deallocate in Write Zeroes: Not Supported 00:26:19.488 Deallocated Guard Field: 0xFFFF 00:26:19.488 Flush: Supported 00:26:19.488 Reservation: Not Supported 00:26:19.488 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:19.488 Size (in LBAs): 1953525168 (931GiB) 00:26:19.488 Capacity (in LBAs): 1953525168 (931GiB) 00:26:19.488 Utilization (in LBAs): 1953525168 (931GiB) 00:26:19.488 UUID: 105e2194-b0a8-4852-afd0-469c96196eb6 00:26:19.488 Thin Provisioning: Not Supported 00:26:19.488 Per-NS Atomic Units: Yes 00:26:19.488 Atomic Boundary Size (Normal): 0 00:26:19.488 Atomic Boundary Size (PFail): 0 00:26:19.488 Atomic Boundary Offset: 0 00:26:19.488 NGUID/EUI64 Never Reused: No 00:26:19.488 ANA group ID: 1 00:26:19.488 Namespace Write Protected: No 00:26:19.488 Number of LBA Formats: 1 00:26:19.488 Current LBA Format: LBA Format #00 00:26:19.488 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.488 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.488 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.488 rmmod nvme_tcp 00:26:19.488 rmmod nvme_fabrics 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.746 10:54:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:21.648 10:54:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:21.648 10:54:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:23.023 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:23.023 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:23.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:26:23.960 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:24.219 00:26:24.219 real 0m10.108s 00:26:24.219 user 0m2.200s 00:26:24.219 sys 0m3.781s 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.219 ************************************ 00:26:24.219 END TEST nvmf_identify_kernel_target 00:26:24.219 ************************************ 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.219 ************************************ 00:26:24.219 START TEST nvmf_auth_host 00:26:24.219 ************************************ 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:24.219 * Looking for test storage... 
00:26:24.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:24.219 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.479 --rc genhtml_branch_coverage=1 00:26:24.479 --rc genhtml_function_coverage=1 00:26:24.479 --rc genhtml_legend=1 00:26:24.479 --rc geninfo_all_blocks=1 00:26:24.479 --rc geninfo_unexecuted_blocks=1 00:26:24.479 00:26:24.479 ' 00:26:24.479 10:54:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.479 --rc genhtml_branch_coverage=1 00:26:24.479 --rc genhtml_function_coverage=1 00:26:24.479 --rc genhtml_legend=1 00:26:24.479 --rc geninfo_all_blocks=1 00:26:24.479 --rc geninfo_unexecuted_blocks=1 00:26:24.479 00:26:24.479 ' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.479 --rc genhtml_branch_coverage=1 00:26:24.479 --rc genhtml_function_coverage=1 00:26:24.479 --rc genhtml_legend=1 00:26:24.479 --rc geninfo_all_blocks=1 00:26:24.479 --rc geninfo_unexecuted_blocks=1 00:26:24.479 00:26:24.479 ' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.479 --rc genhtml_branch_coverage=1 00:26:24.479 --rc genhtml_function_coverage=1 00:26:24.479 --rc genhtml_legend=1 00:26:24.479 --rc geninfo_all_blocks=1 00:26:24.479 --rc geninfo_unexecuted_blocks=1 00:26:24.479 00:26:24.479 ' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.479 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.480 10:54:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:24.480 10:54:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:24.480 10:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.006 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:27.007 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:27.007 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:27.007 Found net devices under 0000:09:00.0: cvl_0_0 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:27.007 Found net devices under 0000:09:00.1: cvl_0_1 00:26:27.007 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.007 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.007 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:26:27.007 00:26:27.007 --- 10.0.0.2 ping statistics --- 00:26:27.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.007 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:27.008 00:26:27.008 --- 10.0.0.1 ping statistics --- 00:26:27.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.008 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1437718 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:27.008 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1437718 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1437718 ']' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=664d0a1db76a3017ee86b103927dd26f 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wzQ 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 664d0a1db76a3017ee86b103927dd26f 0 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 664d0a1db76a3017ee86b103927dd26f 0 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=664d0a1db76a3017ee86b103927dd26f 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:27.008 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wzQ 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wzQ 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wzQ 00:26:27.265 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=af7faac659b6a2055df35050d3a504a0f899bca00633ab40a0158abe4f2da774 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.b1y 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key af7faac659b6a2055df35050d3a504a0f899bca00633ab40a0158abe4f2da774 3 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 af7faac659b6a2055df35050d3a504a0f899bca00633ab40a0158abe4f2da774 3 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=af7faac659b6a2055df35050d3a504a0f899bca00633ab40a0158abe4f2da774 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.b1y 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.b1y 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.b1y 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a11b82b14c22fa253bbc18440586c654083784596444eacc 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w9d 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a11b82b14c22fa253bbc18440586c654083784596444eacc 0 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a11b82b14c22fa253bbc18440586c654083784596444eacc 0 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.265 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a11b82b14c22fa253bbc18440586c654083784596444eacc 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w9d 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w9d 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.w9d 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=268fe0050652c2820cad01b589e3c8bc758dbbeab7100d6c 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qfb 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 268fe0050652c2820cad01b589e3c8bc758dbbeab7100d6c 2 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 268fe0050652c2820cad01b589e3c8bc758dbbeab7100d6c 2 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=268fe0050652c2820cad01b589e3c8bc758dbbeab7100d6c 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qfb 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qfb 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qfb 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b9b526b48a686e8c91968306f37e4af 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.y7v 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b9b526b48a686e8c91968306f37e4af 1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b9b526b48a686e8c91968306f37e4af 1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b9b526b48a686e8c91968306f37e4af 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.y7v 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.y7v 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.y7v 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=e7d224bb8ba3414d6d6d7bf803777b09 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.n4O 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7d224bb8ba3414d6d6d7bf803777b09 1 00:26:27.265 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7d224bb8ba3414d6d6d7bf803777b09 1 00:26:27.266 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.266 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.266 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7d224bb8ba3414d6d6d7bf803777b09 00:26:27.266 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:27.266 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.n4O 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.n4O 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.n4O 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:27.524 10:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0f585ebdb7c13617e81fd529603d249fb7888cbe5bb5212d 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CkL 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0f585ebdb7c13617e81fd529603d249fb7888cbe5bb5212d 2 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0f585ebdb7c13617e81fd529603d249fb7888cbe5bb5212d 2 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0f585ebdb7c13617e81fd529603d249fb7888cbe5bb5212d 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CkL 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CkL 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CkL 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=447821cf103635649d1073e5c3d48519 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wqS 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 447821cf103635649d1073e5c3d48519 0 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 447821cf103635649d1073e5c3d48519 0 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=447821cf103635649d1073e5c3d48519 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:27.524 10:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wqS 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wqS 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.wqS 00:26:27.524 10:54:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf380010102328cfc815c76cfa04424da9b097108ef6f0505fbb7d232c1958d1 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Fly 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf380010102328cfc815c76cfa04424da9b097108ef6f0505fbb7d232c1958d1 3 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf380010102328cfc815c76cfa04424da9b097108ef6f0505fbb7d232c1958d1 3 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf380010102328cfc815c76cfa04424da9b097108ef6f0505fbb7d232c1958d1 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Fly 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Fly 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Fly 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1437718 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1437718 ']' 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
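The gen_dhchap_key/format_dhchap_key calls traced above boil down to: draw len/2 random bytes with xxd, then wrap them in the DHHC-1 secret representation (two-digit digest id, base64 of the key bytes with a CRC-32 trailer — the same layout nvme-cli's gen-dhchap-key emits). A minimal standalone sketch of that flow, not the exact nvmf/common.sh helper:

```shell
# Hedged sketch of gen_dhchap_key sha256 32 as traced above; the real
# format_key helper lives in nvmf/common.sh, this only mirrors the idea.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars = 16 random bytes
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" 1 > "$file" <<'PYEOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])              # 0=null 1=sha256 2=sha384 3=sha512
crc = binascii.crc32(key).to_bytes(4, "little")  # CRC-32 trailer, little-endian
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
chmod 0600 "$file"
```

The resulting /tmp/spdk.key-sha256.* file holds a single `DHHC-1:01:…:` line, matching the `keys[2]=/tmp/spdk.key-sha256.y7v` assignment in the trace.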
00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.524 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wzQ 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.b1y ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b1y 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.w9d 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.qfb ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qfb 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.y7v 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.n4O ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n4O 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.CkL 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wqS ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wqS 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Fly 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.782 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.040 10:54:15 
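The registration loop traced above (host/auth.sh@80-82) hands each generated secret file to the SPDK target as key<N>, and its controller counterpart — when one was generated — as ckey<N>. Sketched as a plain loop, with rpc_cmd stubbed out here since it normally talks to the target over /var/tmp/spdk.sock:

```shell
# Sketch of the host/auth.sh keyring registration loop; rpc_cmd is a
# stand-in for SPDK's scripts/rpc.py wrapper and just records the calls.
rpc_cmd() { printf '%s\n' "$*" >> /tmp/rpc_calls.txt; }

keys=(/tmp/spdk.key-null.wzQ /tmp/spdk.key-null.w9d)      # paths from the trace
ckeys=(/tmp/spdk.key-sha512.b1y /tmp/spdk.key-sha384.qfb)

: > /tmp/rpc_calls.txt
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # the controller key is optional; register it only when it exists
    [[ -n ${ckeys[i]:-} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```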
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.040 10:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:28.970 Waiting for block devices as requested 00:26:28.970 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:29.227 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:29.227 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:29.227 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:29.227 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:29.484 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:29.484 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:29.484 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:29.484 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:29.742 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:29.742 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:29.742 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:29.999 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:29.999 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:29.999 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:30.257 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:30.257 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:30.515 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:30.515 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:30.515 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:30.515 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:30.515 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:30.772 No valid GPT data, bailing 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:30.772 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:30.773 00:26:30.773 Discovery Log Number of Records 2, Generation counter 2 00:26:30.773 =====Discovery Log Entry 0====== 00:26:30.773 trtype: tcp 00:26:30.773 adrfam: ipv4 00:26:30.773 subtype: current discovery subsystem 00:26:30.773 treq: not specified, sq flow control disable supported 00:26:30.773 portid: 1 00:26:30.773 trsvcid: 4420 00:26:30.773 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.773 traddr: 10.0.0.1 00:26:30.773 eflags: none 00:26:30.773 sectype: none 00:26:30.773 =====Discovery Log Entry 1====== 00:26:30.773 trtype: tcp 00:26:30.773 adrfam: ipv4 00:26:30.773 subtype: nvme subsystem 00:26:30.773 treq: not specified, sq flow control disable supported 00:26:30.773 portid: 1 00:26:30.773 trsvcid: 4420 00:26:30.773 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:30.773 traddr: 10.0.0.1 00:26:30.773 eflags: none 00:26:30.773 sectype: none 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
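The configure_kernel_target steps traced above (nvmf/common.sh@686-705) are plain configfs writes: create the subsystem and namespace directories, back the namespace with the free /dev/nvme0n1, then describe the TCP port and link the subsystem into it. A condensed sketch, written as a function so it can also be exercised against a scratch directory; on a real kernel the attribute files already exist under /sys/kernel/config/nvmet and only root may write them:

```shell
# Sketch of configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 as
# traced above. Attribute names follow the kernel nvmet configfs layout;
# the mkdir -p is for scratch-directory runs (real configfs creates
# ports/1/subsystems itself).
setup_kernel_target() {
    local nvmet=$1
    local subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1/subsystems"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
}
```

The trailing `ln -s` is what exposes the subsystem on port 1 — which is exactly what the `nvme discover` output in the trace then confirms with its two discovery log entries.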
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.773 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.030 nvme0n1 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:31.030 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
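The nvmet_auth_set_key echoes traced above (host/auth.sh@48-51) point the kernel target's host entry at the hash, DH group, and DHHC-1 secrets it must demand from the initiator. A sketch of those writes, under the assumption that the attribute names are the kernel nvmet ones (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and written as a function so it can be pointed at a scratch directory:

```shell
# Sketch of nvmet_auth_set_key as traced above; on a real system $1 would be
# /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 (root only).
# Attribute names are assumed from the kernel nvmet configfs interface.
nvmet_auth_set_key_sketch() {
    local host=$1 digest=$2 dhgroup=$3 key=$4 ckey=${5:-}
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup"      > "$host/dhchap_dhgroup"
    echo "$key"          > "$host/dhchap_key"
    # controller key is only set for bidirectional authentication
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
```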
00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.031 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.290 nvme0n1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.290 10:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.290 
10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.290 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 nvme0n1 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.547 10:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:31.547 nvme0n1 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.547 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.805 nvme0n1 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.805 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.063 10:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 nvme0n1 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 
10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:32.063 
10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.063 10:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.063 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.321 nvme0n1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.321 10:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.321 10:54:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.321 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.322 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.322 10:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.580 nvme0n1 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.580 10:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.580 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.839 nvme0n1 00:26:32.839 10:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:32.839 10:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.839 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.097 nvme0n1 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.097 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.098 10:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.098 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 nvme0n1 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.356 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.357 10:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.613 nvme0n1 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.613 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.911 
10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.911 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.170 nvme0n1 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.170 10:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.170 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.428 nvme0n1 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.428 10:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:34.428 
10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.428 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.429 10:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.429 10:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.685 nvme0n1 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.685 10:54:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.685 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.686 
10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.686 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.943 nvme0n1 00:26:34.943 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.943 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.943 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.943 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.943 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.201 10:54:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.201 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.202 10:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.766 nvme0n1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.766 10:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.766 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.333 nvme0n1 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.333 10:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.333 10:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 nvme0n1 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.899 10:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.899 10:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.899 10:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.465 nvme0n1 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.465 10:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:37.465 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.466 10:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.466 10:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.723 nvme0n1 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.723 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.982 10:54:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.982 10:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.915 nvme0n1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.916 10:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.916 10:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.916 10:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.916 10:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.850 nvme0n1 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.850 10:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.850 10:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.416 nvme0n1 00:26:40.416 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.416 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.416 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.416 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.416 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.675 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 nvme0n1 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.611 
10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.611 10:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.611 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.573 nvme0n1 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:42.573 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 nvme0n1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.574 
10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.574 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.832 nvme0n1 
00:26:42.832 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.832 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.832 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:42.833 10:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.833 
10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.833 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.091 nvme0n1 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.091 10:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.091 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.349 nvme0n1 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.349 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.350 10:54:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 nvme0n1 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.609 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.609 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.609 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.609 10:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.609 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.610 nvme0n1 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.610 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.868 
10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.868 nvme0n1 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:43.868 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.869 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.869 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.869 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.869 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 
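The `DHHC-1:xx:<base64>:` strings the trace echoes into the target (`key=…`/`ckey=…`) follow the NVMe-oF DH-HMAC-CHAP secret representation: a `DHHC-1` prefix, a two-digit hash identifier, and a base64 payload that carries the raw secret followed by a 4-byte little-endian CRC32 of that secret. A minimal standalone sketch of splitting one of these strings into its fields, assuming that payload layout (the key below is `key1` copied verbatim from the trace; `parse_dhchap_key` is an illustrative helper, not part of SPDK):

```python
import base64
import binascii

def parse_dhchap_key(key: str):
    """Split a 'DHHC-1:xx:<base64>:' secret into (hash_id, secret, crc_ok).

    Assumes the base64 payload is secret || CRC32(secret) with the CRC
    stored little-endian, as in the NVMe-oF secret representation.
    """
    prefix, hash_id, payload = key.rstrip(":").split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DH-HMAC-CHAP secret")
    raw = base64.b64decode(payload)
    secret, crc_stored = raw[:-4], int.from_bytes(raw[-4:], "little")
    crc_ok = binascii.crc32(secret) == crc_stored
    return hash_id, secret, crc_ok

# key1 from the trace above (hash id 00 = secret used as-is, no transform)
key = ("DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQw"
       "ODM3ODQ1OTY0NDRlYWNj9aDAKA==:")
hash_id, secret, crc_ok = parse_dhchap_key(key)
print(hash_id, len(secret), crc_ok)
```

The 52-byte payload splits into a 48-byte secret plus the 4-byte checksum, which is why the trace's keys always decode to 36, 52, or 68 bytes (32/48/64-byte secrets plus CRC). The two-digit id tags the transformation hash for the secret itself and is independent of the per-connection digest (`sha384` here) negotiated via `bdev_nvme_set_options --dhchap-digests`.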
00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.127 10:54:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.127 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.128 nvme0n1 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.128 10:54:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.128 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.386 nvme0n1 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.386 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.387 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:44.387 10:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.645 nvme0n1 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.645 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.904 10:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.904 10:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.904 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.904 10:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.163 nvme0n1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.163 
10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.163 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.422 nvme0n1 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.422 10:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.422 10:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.422 10:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.681 nvme0n1 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.940 10:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.940 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.198 nvme0n1 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.198 10:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.198 10:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.198 
10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.198 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.456 nvme0n1 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.456 10:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.456 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.457 10:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.023 nvme0n1 
00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:47.023 10:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.023 
10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.023 10:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.590 nvme0n1 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.590 10:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.590 10:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:47.590 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.591 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.157 nvme0n1 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:48.157 10:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.157 10:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.157 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.158 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.158 10:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 nvme0n1 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.724 10:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.724 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.725 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.725 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:48.725 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.288 nvme0n1 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.288 10:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.288 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.289 10:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.222 nvme0n1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.222 10:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.155 nvme0n1 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.155 10:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.090 nvme0n1 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:52.090 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.091 10:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.025 nvme0n1 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.025 10:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.960 nvme0n1 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:53.960 10:54:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.960 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.961 nvme0n1 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.961 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:54.219 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.220 nvme0n1 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.220 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:54.478 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 
00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.479 nvme0n1 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:54.479 10:54:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.479 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.738 nvme0n1 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.738 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.739 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 nvme0n1 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 
00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:54.997 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.998 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.256 nvme0n1 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.256 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.257 10:54:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.257 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.515 nvme0n1 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.515 10:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.515 10:54:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.515 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.774 nvme0n1 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.774 10:54:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.774 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.775 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.032 nvme0n1 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.032 10:54:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.032 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.033 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.291 nvme0n1 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.291 
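The trace above repeats one cycle per (digest, dhgroup, keyid) combination: push the key to the target, configure the initiator's allowed digests and DH groups with `bdev_nvme_set_options`, attach with `--dhchap-key keyN` (adding `--dhchap-ctrlr-key ckeyN` only when a controller key exists, as with keyid 0-3 but not 4), confirm the controller came up, then detach. A minimal sketch of that loop follows; `rpc_cmd` is stubbed with `echo` here, since the real suite's wrapper around SPDK's `scripts/rpc.py` and a live target are assumptions outside this log, and the key values are placeholders rather than the DHHC-1 secrets shown above.

```shell
#!/usr/bin/env bash
# Sketch of the per-dhgroup/per-keyid auth loop seen in the trace.
# rpc_cmd is a stub: it just echoes the RPC it would have issued.
rpc_cmd() { echo "rpc_cmd $*"; }

digest=sha512
dhgroups=(ffdhe3072 ffdhe4096)
# Placeholder secrets; keyid 4 deliberately has no controller key,
# mirroring the trace where ckey is empty for that iteration.
keys=([0]="key0-secret" [1]="key1-secret" [2]="key2-secret" [3]="key3-secret" [4]="key4-secret")
ckeys=([0]="ckey0-secret" [1]="ckey1-secret" [2]="ckey2-secret" [3]="ckey3-secret")

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # The trace's idiom: ${ckeys[keyid]:+...} expands to the two words
    # '--dhchap-ctrlr-key ckeyN' when a controller key exists, and to
    # nothing at all (an empty array) when it does not.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
```

The empty-array expansion is why the xtrace never shows a bare `--dhchap-ctrlr-key` with no argument for keyid 4: `"${ckey[@]}"` vanishes entirely when the array has zero elements.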
10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.291 10:54:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.291 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.292 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.292 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.292 10:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.550 nvme0n1 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.550 10:54:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.550 
10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.550 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.551 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.809 nvme0n1 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.809 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 
00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.067 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.068 10:54:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.068 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.326 nvme0n1 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.326 10:54:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.326 10:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.585 nvme0n1 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.585 10:54:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.585 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.843 nvme0n1 00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.843 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.101 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.102 10:54:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.102 10:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.667 nvme0n1 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.667 10:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:58.667 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:26:58.668 10:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.668 10:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.668 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 nvme0n1 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.233 10:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:59.233 10:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.233 10:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.491 nvme0n1 00:26:59.491 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.491 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.750 10:54:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.750 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 nvme0n1 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.315 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.316 
10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.316 10:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.882 nvme0n1 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.882 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.883 10:54:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjY0ZDBhMWRiNzZhMzAxN2VlODZiMTAzOTI3ZGQyNmZB1NRP: 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY3ZmFhYzY1OWI2YTIwNTVkZjM1MDUwZDNhNTA0YTBmODk5YmNhMDA2MzNhYjQwYTAxNThhYmU0ZjJkYTc3NIXM91A=: 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.883 10:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.816 nvme0n1 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.816 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:27:01.817 10:54:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.817 10:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.751 nvme0n1 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.751 
10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.751 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 nvme0n1 00:27:03.318 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.318 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.318 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.576 10:54:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGY1ODVlYmRiN2MxMzYxN2U4MWZkNTI5NjAzZDI0OWZiNzg4OGNiZTViYjUyMTJkXrWikw==: 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: ]] 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ3ODIxY2YxMDM2MzU2NDlkMTA3M2U1YzNkNDg1MTkaudC9: 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.576 10:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.576 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:04.511 nvme0n1 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmYzODAwMTAxMDIzMjhjZmM4MTVjNzZjZmEwNDQyNGRhOWIwOTcxMDhlZjZmMDUwNWZiYjdkMjMyYzE5NThkMVibn9k=: 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.511 
10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.511 10:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.445 nvme0n1 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.445 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:27:05.446 
10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.446 request: 00:27:05.446 { 00:27:05.446 "name": "nvme0", 00:27:05.446 "trtype": "tcp", 00:27:05.446 "traddr": "10.0.0.1", 00:27:05.446 "adrfam": "ipv4", 00:27:05.446 "trsvcid": "4420", 00:27:05.446 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.446 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.446 "prchk_reftag": false, 00:27:05.446 "prchk_guard": false, 00:27:05.446 "hdgst": false, 00:27:05.446 "ddgst": false, 00:27:05.446 "allow_unrecognized_csi": false, 00:27:05.446 "method": "bdev_nvme_attach_controller", 00:27:05.446 "req_id": 1 00:27:05.446 } 00:27:05.446 Got JSON-RPC error response 00:27:05.446 response: 00:27:05.446 { 00:27:05.446 "code": -5, 00:27:05.446 "message": "Input/output 
error" 00:27:05.446 } 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:05.446 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.447 10:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.447 request: 00:27:05.447 { 00:27:05.447 "name": "nvme0", 00:27:05.447 "trtype": "tcp", 00:27:05.447 "traddr": "10.0.0.1", 
00:27:05.447 "adrfam": "ipv4", 00:27:05.447 "trsvcid": "4420", 00:27:05.447 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.447 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.447 "prchk_reftag": false, 00:27:05.447 "prchk_guard": false, 00:27:05.447 "hdgst": false, 00:27:05.447 "ddgst": false, 00:27:05.447 "dhchap_key": "key2", 00:27:05.447 "allow_unrecognized_csi": false, 00:27:05.447 "method": "bdev_nvme_attach_controller", 00:27:05.447 "req_id": 1 00:27:05.447 } 00:27:05.447 Got JSON-RPC error response 00:27:05.447 response: 00:27:05.447 { 00:27:05.447 "code": -5, 00:27:05.447 "message": "Input/output error" 00:27:05.447 } 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.447 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.705 10:54:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.705 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.706 10:54:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.706 request: 00:27:05.706 { 00:27:05.706 "name": "nvme0", 00:27:05.706 "trtype": "tcp", 00:27:05.706 "traddr": "10.0.0.1", 00:27:05.706 "adrfam": "ipv4", 00:27:05.706 "trsvcid": "4420", 00:27:05.706 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.706 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.706 "prchk_reftag": false, 00:27:05.706 "prchk_guard": false, 00:27:05.706 "hdgst": false, 00:27:05.706 "ddgst": false, 00:27:05.706 "dhchap_key": "key1", 00:27:05.706 "dhchap_ctrlr_key": "ckey2", 00:27:05.706 "allow_unrecognized_csi": false, 00:27:05.706 "method": "bdev_nvme_attach_controller", 00:27:05.706 "req_id": 1 00:27:05.706 } 00:27:05.706 Got JSON-RPC error response 00:27:05.706 response: 00:27:05.706 { 00:27:05.706 "code": -5, 00:27:05.706 "message": "Input/output error" 00:27:05.706 } 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.706 nvme0n1 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.706 10:54:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.706 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.964 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.964 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.964 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.964 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.965 10:54:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.965 request: 00:27:05.965 { 00:27:05.965 "name": "nvme0", 00:27:05.965 "dhchap_key": "key1", 00:27:05.965 "dhchap_ctrlr_key": "ckey2", 00:27:05.965 "method": "bdev_nvme_set_keys", 00:27:05.965 "req_id": 1 00:27:05.965 } 00:27:05.965 Got JSON-RPC error response 00:27:05.965 response: 00:27:05.965 { 00:27:05.965 "code": -13, 00:27:05.965 "message": "Permission denied" 00:27:05.965 } 00:27:05.965 
10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:05.965 10:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:07.339 10:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:08.273 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.273 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.273 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTExYjgyYjE0YzIyZmEyNTNiYmMxODQ0MDU4NmM2NTQwODM3ODQ1OTY0NDRlYWNj9aDAKA==: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: ]] 00:27:08.274 10:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjY4ZmUwMDUwNjUyYzI4MjBjYWQwMWI1ODllM2M4YmM3NThkYmJlYWI3MTAwZDZjmUywmw==: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.274 nvme0n1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.274 10:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5YjUyNmI0OGE2ODZlOGM5MTk2ODMwNmYzN2U0YWbpgsHM: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTdkMjI0YmI4YmEzNDE0ZDZkNmQ3YmY4MDM3NzdiMDkoecvi: 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.274 
10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.274 request: 00:27:08.274 { 00:27:08.274 "name": "nvme0", 00:27:08.274 "dhchap_key": "key2", 00:27:08.274 "dhchap_ctrlr_key": "ckey1", 00:27:08.274 "method": "bdev_nvme_set_keys", 00:27:08.274 "req_id": 1 00:27:08.274 } 00:27:08.274 Got JSON-RPC error response 00:27:08.274 response: 00:27:08.274 { 00:27:08.274 "code": -13, 00:27:08.274 "message": "Permission denied" 00:27:08.274 } 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.274 10:54:55 
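The request/response pair captured above is the negative test for `bdev_nvme_set_keys`: rotating to `key2` while presenting `ckey1` is rejected by the target with `-13` (Permission denied), which the `NOT` wrapper treats as success. A minimal reconstruction of the logged JSON-RPC payloads (field names taken verbatim from the log):

```shell
# JSON-RPC request/response for the rejected key rotation, as logged.
# This only reconstructs the payloads; no target is contacted.
request='{"name":"nvme0","dhchap_key":"key2","dhchap_ctrlr_key":"ckey1","method":"bdev_nvme_set_keys","req_id":1}'
response='{"code":-13,"message":"Permission denied"}'

echo "$request"
echo "$response"
```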
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:08.274 10:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.651 rmmod nvme_tcp 00:27:09.651 rmmod nvme_fabrics 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1437718 ']' 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1437718 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1437718 ']' 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1437718 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437718 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437718' 00:27:09.651 killing process with pid 1437718 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1437718 00:27:09.651 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1437718 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.651 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:12.238 10:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:13.175 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:13.175 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:13.175 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:14.111 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:14.111 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wzQ /tmp/spdk.key-null.w9d /tmp/spdk.key-sha256.y7v /tmp/spdk.key-sha384.CkL /tmp/spdk.key-sha512.Fly 
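The `clean_kernel_target` sequence logged above tears down the kernel nvmet target through configfs in dependency order (host links before hosts, port links before ports, namespaces before the subsystem). A sketch of the same steps as a script; paths and NQNs are copied from the log, and it assumes root on a host with this exact nvmet layout configured:

```shell
# Kernel nvmet teardown mirroring clean_kernel_target in the log.
# Requires root and an existing nvmet configfs hierarchy.
NVMET=/sys/kernel/config/nvmet
SUBSYS="$NVMET/subsystems/nqn.2024-02.io.spdk:cnode0"

rm -f "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"   # unlink the host
rmdir "$NVMET/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$SUBSYS/namespaces/1/enable"                    # disable the namespace
rm -f "$NVMET/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
rmdir "$SUBSYS/namespaces/1"
rmdir "$NVMET/ports/1"
rmdir "$SUBSYS"
modprobe -r nvmet_tcp nvmet                               # finally unload modules
```

Each `rmdir` only succeeds once the children and symlinks beneath it are gone, which is why the links are removed first.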
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:14.111 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.486 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:15.486 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:15.486 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:15.486 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:15.486 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:15.486 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:15.486 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:15.487 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:15.487 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:15.487 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:15.487 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:15.487 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:15.487 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:15.487 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:15.487 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:15.487 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:15.487 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:15.487 00:27:15.487 real 0m51.314s 00:27:15.487 user 0m48.728s 00:27:15.487 sys 0m6.400s 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.487 ************************************ 00:27:15.487 END TEST nvmf_auth_host 00:27:15.487 ************************************ 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:15.487 10:55:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.487 10:55:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.746 ************************************ 00:27:15.746 START TEST nvmf_digest 00:27:15.746 ************************************ 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:15.746 * Looking for test storage... 00:27:15.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:15.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.746 --rc genhtml_branch_coverage=1 00:27:15.746 --rc genhtml_function_coverage=1 00:27:15.746 --rc genhtml_legend=1 00:27:15.746 --rc geninfo_all_blocks=1 00:27:15.746 --rc geninfo_unexecuted_blocks=1 00:27:15.746 00:27:15.746 ' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:15.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.746 --rc genhtml_branch_coverage=1 00:27:15.746 --rc genhtml_function_coverage=1 00:27:15.746 --rc genhtml_legend=1 00:27:15.746 --rc geninfo_all_blocks=1 00:27:15.746 --rc geninfo_unexecuted_blocks=1 00:27:15.746 00:27:15.746 ' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:15.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.746 --rc genhtml_branch_coverage=1 00:27:15.746 --rc genhtml_function_coverage=1 00:27:15.746 --rc genhtml_legend=1 00:27:15.746 --rc geninfo_all_blocks=1 00:27:15.746 --rc geninfo_unexecuted_blocks=1 00:27:15.746 00:27:15.746 ' 00:27:15.746 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:15.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.746 --rc genhtml_branch_coverage=1 00:27:15.746 --rc genhtml_function_coverage=1 00:27:15.746 --rc genhtml_legend=1 00:27:15.746 --rc geninfo_all_blocks=1 00:27:15.747 --rc geninfo_unexecuted_blocks=1 00:27:15.747 00:27:15.747 ' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.747 10:55:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.747 10:55:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.276 10:55:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:18.276 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:18.276 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:18.276 Found net devices under 0000:09:00.0: cvl_0_0 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:18.276 Found net devices under 0000:09:00.1: cvl_0_1 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.276 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:27:18.277 00:27:18.277 --- 10.0.0.2 ping statistics --- 00:27:18.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.277 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:27:18.277 00:27:18.277 --- 10.0.0.1 ping statistics --- 00:27:18.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.277 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.277 ************************************ 00:27:18.277 START TEST nvmf_digest_clean 00:27:18.277 ************************************ 00:27:18.277 
10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1447345 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1447345 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1447345 ']' 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.277 10:55:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.277 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.277 [2024-11-19 10:55:05.680098] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:18.277 [2024-11-19 10:55:05.680176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.277 [2024-11-19 10:55:05.760916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.277 [2024-11-19 10:55:05.828648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.277 [2024-11-19 10:55:05.828708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.277 [2024-11-19 10:55:05.828746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.277 [2024-11-19 10:55:05.828767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.277 [2024-11-19 10:55:05.828786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:18.277 [2024-11-19 10:55:05.829501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.535 10:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.535 null0 00:27:18.535 [2024-11-19 10:55:06.036788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.535 [2024-11-19 10:55:06.060984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1447372 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1447372 /var/tmp/bperf.sock 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1447372 ']' 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.536 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.536 [2024-11-19 10:55:06.109775] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:18.536 [2024-11-19 10:55:06.109851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447372 ] 00:27:18.794 [2024-11-19 10:55:06.175953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.794 [2024-11-19 10:55:06.233597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.794 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.794 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:18.794 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:18.794 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:18.794 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:19.359 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.359 10:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.925 nvme0n1 00:27:19.925 10:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:19.925 10:55:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.925 Running I/O for 2 seconds... 00:27:21.790 18053.00 IOPS, 70.52 MiB/s [2024-11-19T09:55:09.671Z] 18072.50 IOPS, 70.60 MiB/s 00:27:22.048 Latency(us) 00:27:22.048 [2024-11-19T09:55:09.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.048 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:22.048 nvme0n1 : 2.01 18094.71 70.68 0.00 0.00 7062.32 3519.53 13786.83 00:27:22.048 [2024-11-19T09:55:09.671Z] =================================================================================================================== 00:27:22.048 [2024-11-19T09:55:09.671Z] Total : 18094.71 70.68 0.00 0.00 7062.32 3519.53 13786.83 00:27:22.048 { 00:27:22.048 "results": [ 00:27:22.048 { 00:27:22.048 "job": "nvme0n1", 00:27:22.048 "core_mask": "0x2", 00:27:22.048 "workload": "randread", 00:27:22.048 "status": "finished", 00:27:22.048 "queue_depth": 128, 00:27:22.048 "io_size": 4096, 00:27:22.048 "runtime": 2.007216, 00:27:22.048 "iops": 18094.714270910554, 00:27:22.048 "mibps": 70.68247762074435, 00:27:22.048 "io_failed": 0, 00:27:22.048 "io_timeout": 0, 00:27:22.048 "avg_latency_us": 7062.318989720999, 00:27:22.048 "min_latency_us": 3519.525925925926, 00:27:22.048 "max_latency_us": 13786.832592592593 00:27:22.048 } 00:27:22.048 ], 00:27:22.048 "core_count": 1 00:27:22.048 } 00:27:22.048 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:22.048 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:22.048 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:22.048 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:22.048 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:22.048 | select(.opcode=="crc32c") 00:27:22.048 | "\(.module_name) \(.executed)"' 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1447372 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1447372 ']' 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1447372 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447372 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447372' 00:27:22.306 killing process with pid 1447372 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1447372 00:27:22.306 Received shutdown signal, test time was about 2.000000 seconds 00:27:22.306 00:27:22.306 Latency(us) 00:27:22.306 [2024-11-19T09:55:09.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.306 [2024-11-19T09:55:09.929Z] =================================================================================================================== 00:27:22.306 [2024-11-19T09:55:09.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:22.306 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1447372 00:27:22.564 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1447897 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1447897 /var/tmp/bperf.sock 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1447897 ']' 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:22.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.565 10:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.565 [2024-11-19 10:55:10.032362] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:22.565 [2024-11-19 10:55:10.032458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447897 ] 00:27:22.565 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:22.565 Zero copy mechanism will not be used. 
00:27:22.565 [2024-11-19 10:55:10.100799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.565 [2024-11-19 10:55:10.158078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.823 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.823 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:22.823 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:22.823 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:22.823 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:23.081 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.081 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.339 nvme0n1 00:27:23.596 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:23.596 10:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:23.596 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:23.596 Zero copy mechanism will not be used. 00:27:23.596 Running I/O for 2 seconds... 
00:27:25.464 6212.00 IOPS, 776.50 MiB/s [2024-11-19T09:55:13.087Z] 5916.50 IOPS, 739.56 MiB/s 00:27:25.464 Latency(us) 00:27:25.464 [2024-11-19T09:55:13.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.464 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:25.464 nvme0n1 : 2.00 5913.65 739.21 0.00 0.00 2701.34 561.30 7233.23 00:27:25.464 [2024-11-19T09:55:13.087Z] =================================================================================================================== 00:27:25.464 [2024-11-19T09:55:13.087Z] Total : 5913.65 739.21 0.00 0.00 2701.34 561.30 7233.23 00:27:25.464 { 00:27:25.464 "results": [ 00:27:25.464 { 00:27:25.464 "job": "nvme0n1", 00:27:25.464 "core_mask": "0x2", 00:27:25.464 "workload": "randread", 00:27:25.464 "status": "finished", 00:27:25.464 "queue_depth": 16, 00:27:25.464 "io_size": 131072, 00:27:25.464 "runtime": 2.003671, 00:27:25.464 "iops": 5913.645503677999, 00:27:25.464 "mibps": 739.2056879597499, 00:27:25.464 "io_failed": 0, 00:27:25.464 "io_timeout": 0, 00:27:25.464 "avg_latency_us": 2701.3394177974074, 00:27:25.464 "min_latency_us": 561.3037037037037, 00:27:25.464 "max_latency_us": 7233.2325925925925 00:27:25.464 } 00:27:25.464 ], 00:27:25.464 "core_count": 1 00:27:25.464 } 00:27:25.722 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:25.722 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:25.722 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:25.722 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:25.722 | select(.opcode=="crc32c") 00:27:25.722 | "\(.module_name) \(.executed)"' 00:27:25.722 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1447897 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1447897 ']' 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1447897 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447897 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447897' 00:27:25.980 killing process with pid 1447897 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1447897 00:27:25.980 Received shutdown signal, test time was about 2.000000 seconds 
00:27:25.980 00:27:25.980 Latency(us) 00:27:25.980 [2024-11-19T09:55:13.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.980 [2024-11-19T09:55:13.603Z] =================================================================================================================== 00:27:25.980 [2024-11-19T09:55:13.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:25.980 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1447897 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1448306 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1448306 /var/tmp/bperf.sock 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1448306 ']' 00:27:26.239 10:55:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.239 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.239 [2024-11-19 10:55:13.683017] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:26.239 [2024-11-19 10:55:13.683105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448306 ] 00:27:26.239 [2024-11-19 10:55:13.748388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.239 [2024-11-19 10:55:13.804969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.497 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.497 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.497 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:26.497 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:26.497 10:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:26.755 10:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.755 10:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.321 nvme0n1 00:27:27.321 10:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.321 10:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.322 Running I/O for 2 seconds... 
00:27:29.629 17729.00 IOPS, 69.25 MiB/s [2024-11-19T09:55:17.252Z] 19244.00 IOPS, 75.17 MiB/s 00:27:29.629 Latency(us) 00:27:29.629 [2024-11-19T09:55:17.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.629 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:29.629 nvme0n1 : 2.01 19258.28 75.23 0.00 0.00 6637.50 2718.53 17282.09 00:27:29.629 [2024-11-19T09:55:17.252Z] =================================================================================================================== 00:27:29.629 [2024-11-19T09:55:17.252Z] Total : 19258.28 75.23 0.00 0.00 6637.50 2718.53 17282.09 00:27:29.629 { 00:27:29.629 "results": [ 00:27:29.629 { 00:27:29.629 "job": "nvme0n1", 00:27:29.629 "core_mask": "0x2", 00:27:29.629 "workload": "randwrite", 00:27:29.629 "status": "finished", 00:27:29.629 "queue_depth": 128, 00:27:29.629 "io_size": 4096, 00:27:29.629 "runtime": 2.006877, 00:27:29.629 "iops": 19258.28040283485, 00:27:29.629 "mibps": 75.22765782357364, 00:27:29.629 "io_failed": 0, 00:27:29.629 "io_timeout": 0, 00:27:29.629 "avg_latency_us": 6637.503071882459, 00:27:29.629 "min_latency_us": 2718.5303703703703, 00:27:29.629 "max_latency_us": 17282.085925925927 00:27:29.629 } 00:27:29.629 ], 00:27:29.629 "core_count": 1 00:27:29.629 } 00:27:29.629 10:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.629 10:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:29.629 10:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.629 10:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.629 | select(.opcode=="crc32c") 00:27:29.629 | "\(.module_name) \(.executed)"' 00:27:29.629 10:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1448306 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1448306 ']' 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1448306 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448306 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448306' 00:27:29.629 killing process with pid 1448306 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1448306 00:27:29.629 Received shutdown signal, test time was about 2.000000 seconds 
00:27:29.629 00:27:29.629 Latency(us) 00:27:29.629 [2024-11-19T09:55:17.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.629 [2024-11-19T09:55:17.252Z] =================================================================================================================== 00:27:29.629 [2024-11-19T09:55:17.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.629 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1448306 00:27:29.886 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:29.886 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1448712 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1448712 /var/tmp/bperf.sock 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1448712 ']' 00:27:29.887 10:55:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.887 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:29.887 [2024-11-19 10:55:17.440246] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:29.887 [2024-11-19 10:55:17.440356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448712 ] 00:27:29.887 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.887 Zero copy mechanism will not be used. 
00:27:30.143 [2024-11-19 10:55:17.510843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.143 [2024-11-19 10:55:17.572909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.143 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.144 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:30.144 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:30.144 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:30.144 10:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:30.708 10:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.708 10:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.965 nvme0n1 00:27:30.965 10:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:30.965 10:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.222 Zero copy mechanism will not be used. 00:27:31.222 Running I/O for 2 seconds... 
00:27:33.084 5352.00 IOPS, 669.00 MiB/s [2024-11-19T09:55:20.707Z] 5435.50 IOPS, 679.44 MiB/s 00:27:33.084 Latency(us) 00:27:33.084 [2024-11-19T09:55:20.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.084 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:33.084 nvme0n1 : 2.00 5433.19 679.15 0.00 0.00 2934.57 2184.53 9806.13 00:27:33.084 [2024-11-19T09:55:20.707Z] =================================================================================================================== 00:27:33.084 [2024-11-19T09:55:20.707Z] Total : 5433.19 679.15 0.00 0.00 2934.57 2184.53 9806.13 00:27:33.084 { 00:27:33.084 "results": [ 00:27:33.084 { 00:27:33.084 "job": "nvme0n1", 00:27:33.084 "core_mask": "0x2", 00:27:33.084 "workload": "randwrite", 00:27:33.084 "status": "finished", 00:27:33.084 "queue_depth": 16, 00:27:33.084 "io_size": 131072, 00:27:33.084 "runtime": 2.003797, 00:27:33.084 "iops": 5433.185098091274, 00:27:33.084 "mibps": 679.1481372614093, 00:27:33.084 "io_failed": 0, 00:27:33.084 "io_timeout": 0, 00:27:33.084 "avg_latency_us": 2934.5747263640974, 00:27:33.084 "min_latency_us": 2184.5333333333333, 00:27:33.084 "max_latency_us": 9806.127407407408 00:27:33.084 } 00:27:33.084 ], 00:27:33.084 "core_count": 1 00:27:33.084 } 00:27:33.084 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.084 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:33.084 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.084 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.084 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:27:33.084 | select(.opcode=="crc32c") 00:27:33.084 | "\(.module_name) \(.executed)"' 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1448712 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1448712 ']' 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1448712 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448712 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448712' 00:27:33.342 killing process with pid 1448712 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1448712 00:27:33.342 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.342 
00:27:33.342 Latency(us) 00:27:33.342 [2024-11-19T09:55:20.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.342 [2024-11-19T09:55:20.965Z] =================================================================================================================== 00:27:33.342 [2024-11-19T09:55:20.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.342 10:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1448712 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1447345 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1447345 ']' 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1447345 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447345 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447345' 00:27:33.601 killing process with pid 1447345 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1447345 00:27:33.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1447345 00:27:33.859 00:27:33.859 real 
0m15.822s 00:27:33.859 user 0m31.105s 00:27:33.859 sys 0m4.553s 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:33.859 ************************************ 00:27:33.859 END TEST nvmf_digest_clean 00:27:33.859 ************************************ 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.859 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:34.117 ************************************ 00:27:34.117 START TEST nvmf_digest_error 00:27:34.117 ************************************ 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1449267 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:34.117 
10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1449267 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1449267 ']' 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.117 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.117 [2024-11-19 10:55:21.559462] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:34.117 [2024-11-19 10:55:21.559541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.117 [2024-11-19 10:55:21.630959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.117 [2024-11-19 10:55:21.686382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.117 [2024-11-19 10:55:21.686436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:34.117 [2024-11-19 10:55:21.686466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.117 [2024-11-19 10:55:21.686478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.117 [2024-11-19 10:55:21.686488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.117 [2024-11-19 10:55:21.687066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.376 [2024-11-19 10:55:21.807780] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.376 10:55:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.376 null0 00:27:34.376 [2024-11-19 10:55:21.929792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.376 [2024-11-19 10:55:21.954004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1449292 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1449292 /var/tmp/bperf.sock 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1449292 ']' 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.635 [2024-11-19 10:55:22.007111] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:34.635 [2024-11-19 10:55:22.007194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449292 ] 00:27:34.635 [2024-11-19 10:55:22.077914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.635 [2024-11-19 10:55:22.140019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:34.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:34.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.200 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.458 nvme0n1 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:35.458 10:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.458 Running I/O for 2 seconds... 00:27:35.458 [2024-11-19 10:55:23.000929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.000982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.001006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.458 [2024-11-19 10:55:23.016433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.016465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.016498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.458 [2024-11-19 10:55:23.031964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.032010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.032037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.458 [2024-11-19 10:55:23.047380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.047414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:364 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.047448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.458 [2024-11-19 10:55:23.058987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.059015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.059047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.458 [2024-11-19 10:55:23.073560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.458 [2024-11-19 10:55:23.073592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.458 [2024-11-19 10:55:23.073610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.089339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.089383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.089402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.099559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.099605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.099622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.113703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.113732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.113764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.127096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.127129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.127148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.139374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.139405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.152829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 
10:55:23.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.152898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.168003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.168032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.168065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.182613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.182643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.716 [2024-11-19 10:55:23.182676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.716 [2024-11-19 10:55:23.196469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.716 [2024-11-19 10:55:23.196501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.196519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.209230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.209263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.221467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.221499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.221517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.233342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.233374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.233409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.249040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.249072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.249090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.260891] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.260923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.260941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.273876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.273907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.273941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.288019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.288048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.288080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.300847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.300895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.300913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.313574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.313618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.313635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.717 [2024-11-19 10:55:23.325686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.717 [2024-11-19 10:55:23.325716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.717 [2024-11-19 10:55:23.325748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.341320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.341355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.341374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.355581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.355624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.367517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.367548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.367580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.379998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.380027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.380067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.394311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.394358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.406544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.406574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.406606] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.420431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.420461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.420492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.433795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.433826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.433843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.445827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.445857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.445890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.458923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.458955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14361 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.458973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.475451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.475483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.488708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.488740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.488758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.501851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.501890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.501908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.512677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.512705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:122 nsid:1 lba:7712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.512737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.529347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.529379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.529397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.543178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.543207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.543239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.556681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.556712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.556730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.569668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.976 [2024-11-19 10:55:23.569698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.976 [2024-11-19 10:55:23.569731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.976 [2024-11-19 10:55:23.583187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:35.977 [2024-11-19 10:55:23.583220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.977 [2024-11-19 10:55:23.583238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.598405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.598446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.598465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.613289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.613326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.613345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.625389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.625419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.625436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.640941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.640986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.656381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.656412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.656430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.667795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.667823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.667841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.681017] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.681061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.681080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.693704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.693734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.693752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.707740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.707770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.235 [2024-11-19 10:55:23.707802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.235 [2024-11-19 10:55:23.723100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.235 [2024-11-19 10:55:23.723131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.723149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.737360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.737398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.737417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.754115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.754148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.764787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.764817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.764835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.779631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.779660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.779677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.796657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.796689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.796708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.807145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.807174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.807190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.823137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.823167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.823184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.836235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.836265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 
10:55:23.836283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.236 [2024-11-19 10:55:23.851199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.236 [2024-11-19 10:55:23.851229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.236 [2024-11-19 10:55:23.851248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.862231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.862261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.862279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.878673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.878706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.878724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.893311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.893345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7975 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.893364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.905866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.905896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.920452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.920481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.920499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.936758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.936789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.936806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.951602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.951633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.951651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.963175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.963221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.963238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:23.981616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.981650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.981679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 18423.00 IOPS, 71.96 MiB/s [2024-11-19T09:55:24.118Z] [2024-11-19 10:55:23.997102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:23.997138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:23.997156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.012173] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.012207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.012226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.023586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.023630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.023647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.037406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.037443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.037461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.051403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.051432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.051462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.063977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.064007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.064039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.077645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.077676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.077693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.495 [2024-11-19 10:55:24.089736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.495 [2024-11-19 10:55:24.089768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.495 [2024-11-19 10:55:24.089787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.496 [2024-11-19 10:55:24.103487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.496 [2024-11-19 10:55:24.103536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.496 [2024-11-19 10:55:24.103553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.116734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.116763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.116795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.129882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.129928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.129946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.140814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.140843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.140874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.155405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.155434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.155465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.167607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.167652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.167669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.182685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.754 [2024-11-19 10:55:24.182729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.754 [2024-11-19 10:55:24.182747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.754 [2024-11-19 10:55:24.194797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.194825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.194856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.209321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.209350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.755 [2024-11-19 10:55:24.209380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.224142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.224173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.224190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.236764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.236794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.236811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.248863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.248892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.248923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.262108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.262158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.262176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.274710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.274740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.274772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.290580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.290611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.290629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.305228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.305274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.305291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.319892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.319923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.319941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.331128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.331157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.331193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.347291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.347351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.347369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.755 [2024-11-19 10:55:24.361576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:36.755 [2024-11-19 10:55:24.361608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.755 [2024-11-19 10:55:24.361626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.377092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 
00:27:37.013 [2024-11-19 10:55:24.377125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.377143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.393007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.393040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.393058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.405155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.405184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.405216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.418103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.418134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.418166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.430571] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.430602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.430619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.446946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.446995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.447013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.460482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.460514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.460532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.473134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.473166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.013 [2024-11-19 10:55:24.473184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:37.013 [2024-11-19 10:55:24.485283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.013 [2024-11-19 10:55:24.485325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.485352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.498854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.498883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.498914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.511337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.511365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.511397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.527383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.527412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.527443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.542620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.542652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.542670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.557972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.558003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.558021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.569480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.569524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.569545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.584235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.584279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.584295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.600002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.600030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.600061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.612709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.612737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.612769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.014 [2024-11-19 10:55:24.625292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.014 [2024-11-19 10:55:24.625340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.014 [2024-11-19 10:55:24.625365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.271 [2024-11-19 10:55:24.638466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.271 [2024-11-19 10:55:24.638497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.271 [2024-11-19 10:55:24.638530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.271 [2024-11-19 10:55:24.653713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.271 [2024-11-19 10:55:24.653743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.271 [2024-11-19 10:55:24.653760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.665919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.665981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.680058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.680087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.680118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.694713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.694750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:9289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.694782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.710186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.710214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.710245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.725601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.725630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.740974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.741021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.741039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.756717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.756749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.756767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.767841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.767889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.767907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.783870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.783929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.794326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.794370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.794387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.808979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 
00:27:37.272 [2024-11-19 10:55:24.809007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.809038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.825806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.825838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.825856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.841295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.841336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.841370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.855660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.855688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.855719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.868271] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.868299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.868337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.272 [2024-11-19 10:55:24.880940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.272 [2024-11-19 10:55:24.880968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.272 [2024-11-19 10:55:24.880998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.530 [2024-11-19 10:55:24.894738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.530 [2024-11-19 10:55:24.894771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.530 [2024-11-19 10:55:24.894789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.530 [2024-11-19 10:55:24.909494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.530 [2024-11-19 10:55:24.909524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.530 [2024-11-19 10:55:24.909542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:37.530 [2024-11-19 10:55:24.921267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.530 [2024-11-19 10:55:24.921295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.530 [2024-11-19 10:55:24.921334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.530 [2024-11-19 10:55:24.935155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.530 [2024-11-19 10:55:24.935187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.530 [2024-11-19 10:55:24.935226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.530 [2024-11-19 10:55:24.950102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.530 [2024-11-19 10:55:24.950130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.531 [2024-11-19 10:55:24.950161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.531 [2024-11-19 10:55:24.966728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.531 [2024-11-19 10:55:24.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.531 [2024-11-19 10:55:24.966778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.531 [2024-11-19 10:55:24.981828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee9bf0) 00:27:37.531 [2024-11-19 10:55:24.981856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.531 [2024-11-19 10:55:24.981872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.531 18397.00 IOPS, 71.86 MiB/s 00:27:37.531 Latency(us) 00:27:37.531 [2024-11-19T09:55:25.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.531 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:37.531 nvme0n1 : 2.00 18418.67 71.95 0.00 0.00 6941.55 3568.07 21651.15 00:27:37.531 [2024-11-19T09:55:25.154Z] =================================================================================================================== 00:27:37.531 [2024-11-19T09:55:25.154Z] Total : 18418.67 71.95 0.00 0.00 6941.55 3568.07 21651.15 00:27:37.531 { 00:27:37.531 "results": [ 00:27:37.531 { 00:27:37.531 "job": "nvme0n1", 00:27:37.531 "core_mask": "0x2", 00:27:37.531 "workload": "randread", 00:27:37.531 "status": "finished", 00:27:37.531 "queue_depth": 128, 00:27:37.531 "io_size": 4096, 00:27:37.531 "runtime": 2.004596, 00:27:37.531 "iops": 18418.673887406738, 00:27:37.531 "mibps": 71.94794487268257, 00:27:37.531 "io_failed": 0, 00:27:37.531 "io_timeout": 0, 00:27:37.531 "avg_latency_us": 6941.552248804787, 00:27:37.531 "min_latency_us": 3568.071111111111, 00:27:37.531 "max_latency_us": 21651.152592592593 00:27:37.531 } 00:27:37.531 ], 00:27:37.531 "core_count": 1 00:27:37.531 } 00:27:37.531 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:37.531 10:55:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:37.531 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:37.531 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:37.531 | .driver_specific 00:27:37.531 | .nvme_error 00:27:37.531 | .status_code 00:27:37.531 | .command_transient_transport_error' 00:27:37.788 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1449292 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1449292 ']' 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1449292 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449292 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449292' 00:27:37.789 killing process with pid 1449292 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 1449292 00:27:37.789 Received shutdown signal, test time was about 2.000000 seconds 00:27:37.789 00:27:37.789 Latency(us) 00:27:37.789 [2024-11-19T09:55:25.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.789 [2024-11-19T09:55:25.412Z] =================================================================================================================== 00:27:37.789 [2024-11-19T09:55:25.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.789 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1449292 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1449733 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1449733 /var/tmp/bperf.sock 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1449733 ']' 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.047 10:55:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.047 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:38.047 [2024-11-19 10:55:25.592929] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:38.047 [2024-11-19 10:55:25.593023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449733 ] 00:27:38.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:38.047 Zero copy mechanism will not be used. 
00:27:38.047 [2024-11-19 10:55:25.665571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.306 [2024-11-19 10:55:25.725244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.306 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.306 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:38.306 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:38.306 10:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.564 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.130 nvme0n1 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:39.130 10:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:39.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:39.130 Zero copy mechanism will not be used. 00:27:39.130 Running I/O for 2 seconds... 00:27:39.130 [2024-11-19 10:55:26.738368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.130 [2024-11-19 10:55:26.738420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.130 [2024-11-19 10:55:26.738443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.130 [2024-11-19 10:55:26.744069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.130 [2024-11-19 10:55:26.744105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.130 [2024-11-19 10:55:26.744123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.130 
[2024-11-19 10:55:26.748314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.130 [2024-11-19 10:55:26.748348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.130 [2024-11-19 10:55:26.748376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.752919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.752951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.752970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.759178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.759211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.759230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.765640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.765705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.771836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.771868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.771886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.777417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.777448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.777466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.782152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.782183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.782200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.786681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.786712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.786730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.791397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.791428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.791445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.795968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.795999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.796016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.800559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.800588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.800636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.805327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.805357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 
10:55:26.805374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.810661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.810693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.810710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.815859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.815903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.815920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.820552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.820582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.820600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.825610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.825641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.825658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.389 [2024-11-19 10:55:26.830920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.389 [2024-11-19 10:55:26.830951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.389 [2024-11-19 10:55:26.830969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.836479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.836510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.836527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.843877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.843908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.843926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.851576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.851614] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.851633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.857448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.857479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.857497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.863186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.863217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.863235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.869079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.869110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.869128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.876443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 
10:55:26.876475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.876493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.883363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.883395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.888859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.888890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.888909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.894631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.894662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.894681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.899813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.899844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.899862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.905084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.905115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.905133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.908537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.908569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.908586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.912662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.912694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.912712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.918332] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.918378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.918397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.923655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.923701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.928693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.928723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.928754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.934866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.934912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.934929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.940403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.940436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.940455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.945238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.945269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.945319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.949649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.390 [2024-11-19 10:55:26.949698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.390 [2024-11-19 10:55:26.954224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.390 [2024-11-19 10:55:26.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.954287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.958584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.958628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.958645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.963083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.963146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.967479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.967509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.967527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.971862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.971893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.971910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.976178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.976206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.976239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.980582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.980626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.980643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.985108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.985146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.985163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.989587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.989647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.994071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.994102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.994120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:26.998826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:26.998858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:26.998876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:27.004211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:27.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:27.004261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.391 [2024-11-19 10:55:27.009181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.391 [2024-11-19 10:55:27.009213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.391 [2024-11-19 10:55:27.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.014113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.014145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.014163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.018819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.018850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.018868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.023351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.023382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.023399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.028015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.028047] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.028064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.032525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.032555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.032572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.036970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.037001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.037018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.042175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.042206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.042224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.047757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.047788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.054116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.054148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.054167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.059425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.059456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.059475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.065024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.065056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.065075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.070166] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.070199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.070224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.075137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.075170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.075188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.080466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.080498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.080516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.086627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.086661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.086679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.092231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.092264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.092282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.095024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.095055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.095072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.651 [2024-11-19 10:55:27.099918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.651 [2024-11-19 10:55:27.099965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.651 [2024-11-19 10:55:27.099983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.104586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.104631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.104648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.109926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.109972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.109990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.115172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.115210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.115229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.120801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.120833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.120851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.125917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.125949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.125966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.130672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.130703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.130720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.135926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.135957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.135974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.140556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.140587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.140606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.143893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.143921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.652 [2024-11-19 10:55:27.143953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.148882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.148911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.148927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.153656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.153687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.153705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.158762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.158793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.158811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.166157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.166188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.166206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.173249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.173280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.173299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.180974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.181006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.181024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.188602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.188634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.188653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.196168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.196200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.196218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.204207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.204239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.204256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.212352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.212384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.212403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.220476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.652 [2024-11-19 10:55:27.220508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.220534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.228218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 
00:27:39.652 [2024-11-19 10:55:27.228251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.652 [2024-11-19 10:55:27.228269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.652 [2024-11-19 10:55:27.235922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.653 [2024-11-19 10:55:27.235954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.653 [2024-11-19 10:55:27.235972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.653 [2024-11-19 10:55:27.243503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.653 [2024-11-19 10:55:27.243535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.653 [2024-11-19 10:55:27.243553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.653 [2024-11-19 10:55:27.251054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.653 [2024-11-19 10:55:27.251086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.653 [2024-11-19 10:55:27.251105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.653 [2024-11-19 10:55:27.258601] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.653 [2024-11-19 10:55:27.258632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.653 [2024-11-19 10:55:27.258650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.653 [2024-11-19 10:55:27.266415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.653 [2024-11-19 10:55:27.266448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.653 [2024-11-19 10:55:27.266466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.274091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.274124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.274142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.281882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.281915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.281933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.289528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.289561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.289579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.297482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.297513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.297532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.305109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.305141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.305160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.312301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.312339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.312357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.317943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.317974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.317992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.323586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.323617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.323635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.330059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.330091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.330110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.335622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.335654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.335672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.340564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.340596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.340620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.343507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.343538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.912 [2024-11-19 10:55:27.343555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.912 [2024-11-19 10:55:27.347843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.912 [2024-11-19 10:55:27.347874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.347891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.353352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.353383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.913 [2024-11-19 10:55:27.353401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.358073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.358104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.358137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.363233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.363264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.363282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.368865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.368897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.368915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.375049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.375081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.375099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.381804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.381836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.381854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.389287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.389336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.396834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.396866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.396885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.404594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.404626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.411824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.411856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.411874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.417831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.417862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.417880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.423051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.423083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.423101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.428111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.428142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.428160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.433334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.433366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.433383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.438502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.438535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.438553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.442212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.442243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.442261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.445806] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.445837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.445855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.449627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.449658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.449676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.452333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.452379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.455352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.455381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.455398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.458273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.458310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.913 [2024-11-19 10:55:27.458330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.913 [2024-11-19 10:55:27.461341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.913 [2024-11-19 10:55:27.461372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.461389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.464355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.464385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.464402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.468310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.468342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.468365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.471958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.471989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.472006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.474944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.474974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.474991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.479124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.479155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.479172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.484193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.484225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.484243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.489141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.489172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.489189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.493984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.494015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.494034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.498815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.498846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.498863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.503928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.503978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.508886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.508924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.508943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.513513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.513545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.513562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.518421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.518452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.518470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.523344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.523375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.523393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.527699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.527729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.527746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.914 [2024-11-19 10:55:27.532182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:39.914 [2024-11-19 10:55:27.532213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.914 [2024-11-19 10:55:27.532230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.536533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.536564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.536582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.541043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.541073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.541090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.545505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.545536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.545554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.550061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.550092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.550109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.554764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.554795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.554812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.559263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.559293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.559319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.563645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.563692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.568050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.568079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.568096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.572402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.572432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.572456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.576901] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.576932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.576950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.581350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.581380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.581397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.585979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.586011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.586034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.590394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.590424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.590442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.594903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.594933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.594951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.599559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.599591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.599608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.604069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.604100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.604118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.609362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.609393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.609411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.615882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.615913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.615931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.623267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.623299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.173 [2024-11-19 10:55:27.623327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.173 [2024-11-19 10:55:27.628792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.173 [2024-11-19 10:55:27.628825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.628843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.634472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.634510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.634529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.639904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.639937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.639955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.645895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.645928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.645946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.650429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.650461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.650479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.654830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.654860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:40.174 [2024-11-19 10:55:27.654879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.659252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.659282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.659299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.663717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.663746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.663764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.668331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.668372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.668389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.674092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.674124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.674142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.678108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.678139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.678157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.681884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.681930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.681946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.686619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.686649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.686667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.691387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.691418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.691435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.696888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.696919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.696936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.704495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.704527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.704545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.710955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.710989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.711007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.717243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 
00:27:40.174 [2024-11-19 10:55:27.717275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.717317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.722433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.722465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.722489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.727009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.727039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.727057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.174 5783.00 IOPS, 722.88 MiB/s [2024-11-19T09:55:27.797Z] [2024-11-19 10:55:27.732468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.732499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.732516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.736812] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.174 [2024-11-19 10:55:27.736859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.174 [2024-11-19 10:55:27.736875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.174 [2024-11-19 10:55:27.741412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.741442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.741460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.745565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.745610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.745627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.750091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.750121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.750138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.755442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.755474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.755492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.762085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.762118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.769882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.769933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.775859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.775893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.775911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.781296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.781336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.781354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.175 [2024-11-19 10:55:27.787229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.175 [2024-11-19 10:55:27.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.175 [2024-11-19 10:55:27.787281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.794892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.794926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.794944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.802153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.802185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.802204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.809283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.809326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.809346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.815503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.815534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.815552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.820580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.820626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.820651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.824083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.824113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.824131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.826907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.826937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.826954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.830653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.830690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.830708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.835062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.835093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.835111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.838693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.838724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.838742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.842247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.842278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.842296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.846720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.846752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.846769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.851154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.851184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.851201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.855706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.855744] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.855763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.860701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.860731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.860765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.865867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.865898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.865931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.871568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.434 [2024-11-19 10:55:27.871600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.434 [2024-11-19 10:55:27.871633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.434 [2024-11-19 10:55:27.877315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.877347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.877365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.883364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.883396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.883414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.889427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.889459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.889478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.894921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.894953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.894970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.900697] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.900730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.900747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.906538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.906580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.906598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.912385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.912417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.912435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.918528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.918578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.925323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.925355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.925372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.931737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.931769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.931787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.936877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.936909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.936926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.941898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.941929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.941947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.947121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.947152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.947170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.952757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.952788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.952812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.958103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.958135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.958153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.435 [2024-11-19 10:55:27.963294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.435 [2024-11-19 10:55:27.963334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.435 [2024-11-19 10:55:27.963353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:40.435 [2024-11-19 10:55:27.968889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0)
00:27:40.435 [2024-11-19 10:55:27.968921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.435 [2024-11-19 10:55:27.968939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:40.435 [2024-11-19 10:55:27.974356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0)
00:27:40.435 [2024-11-19 10:55:27.974388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.435 [2024-11-19 10:55:27.974406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-entry pattern repeats continuously from 10:55:27.980 through 10:55:28.365 for further READ commands on sqid:1 (various cid/lba values, all len:32): a data digest error on tqpair=(0xb232d0) from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, the corresponding READ command notice from nvme_qpair.c: 243:nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c: 474:spdk_nvme_print_completion ...]
00:27:40.957 [2024-11-19 10:55:28.370927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.370973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.370990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.377821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.377872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.377890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.384950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.384979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.384996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.390320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.390351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.390369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.395956] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.396001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.396018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.400763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.400791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.400823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.405524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.405553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.410119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.410150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.410167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.414912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.414941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.414974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.420436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.420467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.420484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.425639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.425669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.425686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.432248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.432279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.432296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.439697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.439729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.439747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.445194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.445225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.445243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.450868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.450899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 10:55:28.450916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.455923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.957 [2024-11-19 10:55:28.455959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.957 [2024-11-19 
10:55:28.455977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.957 [2024-11-19 10:55:28.461600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.461632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.461650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.468704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.468736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.474318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.474362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.474381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.479042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.479073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.479090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.483634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.483665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.483682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.487201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.487232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.487250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.491392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.491421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.491454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.497169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.497200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.497218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.502496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.502528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.502546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.507615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.507661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.507679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.512604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.512634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.517167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 
10:55:28.517197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.517214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.521607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.521637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.521670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.526096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.526141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.526157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.531530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.531561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.531579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.538660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.538692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.538710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.545740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.545772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.545796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.552056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.552089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.552108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.559200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.559232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.559250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.565890] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.565935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.565953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.958 [2024-11-19 10:55:28.572300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:40.958 [2024-11-19 10:55:28.572338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.958 [2024-11-19 10:55:28.572356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.578577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.578624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.578643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.583979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.584011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.584028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.588625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.588670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.588688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.593298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.593334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.593352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.597755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.597790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.597808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.602681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.602712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.602730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.608497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.608528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.608545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.615999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.616030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.616047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.621845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.621875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.621907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.627209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.627240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.627257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.632535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.632566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.632583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.637093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.637124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.641509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.641540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.641558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.644974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.645020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:41.218 [2024-11-19 10:55:28.645038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.649982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.650029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.650048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.654572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.218 [2024-11-19 10:55:28.654616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.218 [2024-11-19 10:55:28.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.218 [2024-11-19 10:55:28.659086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.659134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.663412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.663442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.663460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.668714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.668744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.668762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.675047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.675092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.675109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.682509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.682541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.682559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.688537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.688569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.688592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.696265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.696320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.696340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.703361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.703407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.703423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.710700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.710732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.710749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.718756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 
00:27:41.219 [2024-11-19 10:55:28.718787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.718805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:41.219 [2024-11-19 10:55:28.726783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.726814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.726832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:41.219 5760.50 IOPS, 720.06 MiB/s [2024-11-19T09:55:28.842Z] [2024-11-19 10:55:28.735277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb232d0) 00:27:41.219 [2024-11-19 10:55:28.735315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.219 [2024-11-19 10:55:28.735336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:41.219 00:27:41.219 Latency(us) 00:27:41.219 [2024-11-19T09:55:28.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:41.219 nvme0n1 : 2.04 5642.84 705.35 0.00 0.00 2777.97 649.29 46215.02 00:27:41.219 [2024-11-19T09:55:28.842Z] =================================================================================================================== 00:27:41.219 [2024-11-19T09:55:28.842Z] Total : 5642.84 705.35 0.00 0.00 2777.97 
649.29 46215.02 00:27:41.219 { 00:27:41.219 "results": [ 00:27:41.219 { 00:27:41.219 "job": "nvme0n1", 00:27:41.219 "core_mask": "0x2", 00:27:41.219 "workload": "randread", 00:27:41.219 "status": "finished", 00:27:41.219 "queue_depth": 16, 00:27:41.219 "io_size": 131072, 00:27:41.219 "runtime": 2.044539, 00:27:41.219 "iops": 5642.8368448828805, 00:27:41.219 "mibps": 705.3546056103601, 00:27:41.219 "io_failed": 0, 00:27:41.219 "io_timeout": 0, 00:27:41.219 "avg_latency_us": 2777.9693726143582, 00:27:41.219 "min_latency_us": 649.2918518518519, 00:27:41.219 "max_latency_us": 46215.01629629629 00:27:41.219 } 00:27:41.219 ], 00:27:41.219 "core_count": 1 00:27:41.219 } 00:27:41.219 10:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:41.219 10:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:41.219 10:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:41.219 | .driver_specific 00:27:41.219 | .nvme_error 00:27:41.219 | .status_code 00:27:41.219 | .command_transient_transport_error' 00:27:41.219 10:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 373 > 0 )) 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1449733 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1449733 ']' 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1449733 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 
00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.478 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449733 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449733' 00:27:41.739 killing process with pid 1449733 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1449733 00:27:41.739 Received shutdown signal, test time was about 2.000000 seconds 00:27:41.739 00:27:41.739 Latency(us) 00:27:41.739 [2024-11-19T09:55:29.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.739 [2024-11-19T09:55:29.362Z] =================================================================================================================== 00:27:41.739 [2024-11-19T09:55:29.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1449733 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
qd=128 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1450230 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1450230 /var/tmp/bperf.sock 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1450230 ']' 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.739 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.034 [2024-11-19 10:55:29.364418] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:27:42.035 [2024-11-19 10:55:29.364506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450230 ] 00:27:42.035 [2024-11-19 10:55:29.430796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.035 [2024-11-19 10:55:29.490372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.035 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.035 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:42.035 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.035 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.323 10:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.889 nvme0n1 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:42.889 10:55:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.889 Running I/O for 2 seconds... 
00:27:42.889 [2024-11-19 10:55:30.497370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e01f8 00:27:42.889 [2024-11-19 10:55:30.498235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.889 [2024-11-19 10:55:30.498278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.511595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e12d8 00:27:43.148 [2024-11-19 10:55:30.513044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.513083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.523036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e99d8 00:27:43.148 [2024-11-19 10:55:30.524105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.524135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.535441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166f0bc0 00:27:43.148 [2024-11-19 10:55:30.536426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.536456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.547738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166fc998 00:27:43.148 [2024-11-19 10:55:30.548988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.549030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.560109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ef6a8 00:27:43.148 [2024-11-19 10:55:30.561466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.561495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.572161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ecc78 00:27:43.148 [2024-11-19 10:55:30.573787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.573829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.583807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166f6020 00:27:43.148 [2024-11-19 10:55:30.584895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.584939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.595134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e6300 00:27:43.148 [2024-11-19 10:55:30.596060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.596088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.607408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e0ea0 00:27:43.148 [2024-11-19 10:55:30.608418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.608446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.618588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166f5be8 00:27:43.148 [2024-11-19 10:55:30.620263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.620292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.628813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166e6fa8 00:27:43.148 [2024-11-19 10:55:30.629626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.629653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.641257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166eea00 00:27:43.148 [2024-11-19 10:55:30.642266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.642315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.655800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.656015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.656042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.669396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.669625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.669652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.683346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.683565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:43.148 [2024-11-19 10:55:30.683601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.696941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.697156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.697182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.710597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.710815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.710856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.724144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.724391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.724418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.737879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.738081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18465 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.738107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.751556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.751809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.148 [2024-11-19 10:55:30.751837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.148 [2024-11-19 10:55:30.765204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.148 [2024-11-19 10:55:30.765442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.149 [2024-11-19 10:55:30.765470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.779124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.779351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.779379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.792872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.793111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.793138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.806623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.806860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.806886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.820910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.821141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.821168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.834598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.834836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.834862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.848677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.848901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.848933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.862563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.862801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.862827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.876504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.876730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.876770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.890362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.890586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.890627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.904069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 
00:27:43.408 [2024-11-19 10:55:30.904290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.904338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.917856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.918075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.918101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.931651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.931892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.931919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.945591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.945810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.945854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.959490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.959733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.959759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.973464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.973698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.973725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:30.987375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:30.987605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:30.987645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:31.001210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:31.001461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:31.001488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:31.014762] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:31.014982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:31.015007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.408 [2024-11-19 10:55:31.028497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.408 [2024-11-19 10:55:31.028741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.408 [2024-11-19 10:55:31.028770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.667 [2024-11-19 10:55:31.042147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.667 [2024-11-19 10:55:31.042384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.667 [2024-11-19 10:55:31.042413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.667 [2024-11-19 10:55:31.055924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.667 [2024-11-19 10:55:31.056143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.667 [2024-11-19 10:55:31.056170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:27:43.667 [2024-11-19 10:55:31.069785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.070003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.070030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.083560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.083801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.083828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.097278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.097528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.097556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.111072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.111294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.111341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.124965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.125186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.125213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.138772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.139012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.139038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.152751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.152984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.153010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.166475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.166745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.180248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.180485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.180514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.194052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.194271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.194322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.208024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.208227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.208278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.221985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.222205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.222231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.235842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.236055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.236083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.249560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.249796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.249822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.263489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.263733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.668 [2024-11-19 10:55:31.263761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.668 [2024-11-19 10:55:31.277590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.668 [2024-11-19 10:55:31.277863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:43.668 [2024-11-19 10:55:31.277891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.291075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.291287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.291325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.304940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.305160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.305186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.318613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.318828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.318856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.332337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:2602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.332587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.346137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.346380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.346407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.359855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.360073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.360100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.373891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.374113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.374139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.387674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.387897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.387923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.401312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.401561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.401587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.415168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.415394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.415420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.428927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.429147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.429173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.442704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 
[2024-11-19 10:55:31.442922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.442948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.456422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.456643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.456668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.470163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.470400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.470429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 18800.00 IOPS, 73.44 MiB/s [2024-11-19T09:55:31.551Z] [2024-11-19 10:55:31.483930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.484149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.484176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.497539] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.497774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.497799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.511277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.511557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.524894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.525111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.525137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:43.928 [2024-11-19 10:55:31.538578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:43.928 [2024-11-19 10:55:31.538804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.928 [2024-11-19 10:55:31.538847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:27:44.188 [2024-11-19 10:55:31.551974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.552214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.552240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.565846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.566089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.566123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.579469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.579716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.579743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.593351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.593579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.593606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.606846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.607076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.607118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.620530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.620767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.620793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.634257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.634504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.634530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.648077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.648295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.648343] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.661802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.662045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.662071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.675694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.675915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.675942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.689423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.689671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.689697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.703159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.703395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.703422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.717003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.717242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.717268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.730803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.731002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.731044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.744565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.744803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.744830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.758301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.758511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:44.188 [2024-11-19 10:55:31.758555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.771946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.772157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.772182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.785476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.785707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.785733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.188 [2024-11-19 10:55:31.799014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.188 [2024-11-19 10:55:31.799270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.188 [2024-11-19 10:55:31.799299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.812746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.812958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.812986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.826497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.826727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.826753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.839898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.840137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.853624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.853855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.853880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.867056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.867269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.867317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.880574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.880808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.880833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.893945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.894159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.894184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.907340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.907570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.907617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.920779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 
[2024-11-19 10:55:31.920979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.934230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.934471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.934498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.947730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.947941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.947967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.961122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.961339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.961364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.974604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.974833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.974858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:31.987958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:31.988191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:31.988216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:32.001465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:32.001695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:32.001721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:32.014956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:32.015169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.448 [2024-11-19 10:55:32.015194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.448 [2024-11-19 10:55:32.028631] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.448 [2024-11-19 10:55:32.028842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-19 10:55:32.028867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.449 [2024-11-19 10:55:32.041972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.449 [2024-11-19 10:55:32.042190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-19 10:55:32.042215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.449 [2024-11-19 10:55:32.055769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.449 [2024-11-19 10:55:32.056024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.449 [2024-11-19 10:55:32.056052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.069549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.069761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.069789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:27:44.708 [2024-11-19 10:55:32.083252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.083514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.083541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.096712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.096925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.096950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.110413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.110648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.110674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.123845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.124056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.124082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.137387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.137606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.137646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.151002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.151215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.151241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.164517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.164760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.164786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.177967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.178179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.178204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.191383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.191624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.191665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.204835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.205048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.205073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.218170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.218414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.218439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.231572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.231799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.708 [2024-11-19 10:55:32.231823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.708 [2024-11-19 10:55:32.244868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.708 [2024-11-19 10:55:32.245077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.245120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.258410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.258641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.258666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.271659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.271893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.271923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.285061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.285274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12450 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:44.709 [2024-11-19 10:55:32.285300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.298413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.298633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.298671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.312122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.312383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.312411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.709 [2024-11-19 10:55:32.325815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.709 [2024-11-19 10:55:32.326072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.709 [2024-11-19 10:55:32.326098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.967 [2024-11-19 10:55:32.339653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.339867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.339892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.353248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.353480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.353508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.366805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.367014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.367052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.380234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.380486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.380514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.393750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.393969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.393994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.407225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.407468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.420815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.421013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.421040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.434351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.434572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.434599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.447832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 
[2024-11-19 10:55:32.448040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.448067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.461204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.461456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.461484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 [2024-11-19 10:55:32.474732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.474963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.474989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 18819.00 IOPS, 73.51 MiB/s [2024-11-19T09:55:32.591Z] [2024-11-19 10:55:32.488086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9220) with pdu=0x2000166ed0b0 00:27:44.968 [2024-11-19 10:55:32.488365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.968 [2024-11-19 10:55:32.488393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.968 00:27:44.968 Latency(us) 00:27:44.968 
[2024-11-19T09:55:32.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.968 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:44.968 nvme0n1 : 2.01 18819.28 73.51 0.00 0.00 6786.54 2754.94 15534.46 00:27:44.968 [2024-11-19T09:55:32.591Z] =================================================================================================================== 00:27:44.968 [2024-11-19T09:55:32.591Z] Total : 18819.28 73.51 0.00 0.00 6786.54 2754.94 15534.46 00:27:44.968 { 00:27:44.968 "results": [ 00:27:44.968 { 00:27:44.968 "job": "nvme0n1", 00:27:44.968 "core_mask": "0x2", 00:27:44.968 "workload": "randwrite", 00:27:44.968 "status": "finished", 00:27:44.968 "queue_depth": 128, 00:27:44.968 "io_size": 4096, 00:27:44.968 "runtime": 2.006347, 00:27:44.968 "iops": 18819.277024363182, 00:27:44.968 "mibps": 73.51280087641868, 00:27:44.968 "io_failed": 0, 00:27:44.968 "io_timeout": 0, 00:27:44.968 "avg_latency_us": 6786.543090127577, 00:27:44.968 "min_latency_us": 2754.9392592592594, 00:27:44.968 "max_latency_us": 15534.45925925926 00:27:44.968 } 00:27:44.968 ], 00:27:44.968 "core_count": 1 00:27:44.968 } 00:27:44.968 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:44.968 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:44.968 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:44.968 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:44.968 | .driver_specific 00:27:44.968 | .nvme_error 00:27:44.968 | .status_code 00:27:44.968 | .command_transient_transport_error' 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 148 > 0 )) 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1450230 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1450230 ']' 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1450230 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450230 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.226 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450230' 00:27:45.226 killing process with pid 1450230 00:27:45.227 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1450230 00:27:45.227 Received shutdown signal, test time was about 2.000000 seconds 00:27:45.227 00:27:45.227 Latency(us) 00:27:45.227 [2024-11-19T09:55:32.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.227 [2024-11-19T09:55:32.850Z] =================================================================================================================== 00:27:45.227 [2024-11-19T09:55:32.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.227 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1450230 00:27:45.485 10:55:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1450646 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1450646 /var/tmp/bperf.sock 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1450646 ']' 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.485 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.485 [2024-11-19 10:55:33.054907] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:45.485 [2024-11-19 10:55:33.054992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450646 ] 00:27:45.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:45.485 Zero copy mechanism will not be used. 00:27:45.743 [2024-11-19 10:55:33.121437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.743 [2024-11-19 10:55:33.178684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.743 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.744 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:45.744 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.744 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.002 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.569 nvme0n1 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:46.569 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:46.569 Zero copy mechanism will not be used. 00:27:46.569 Running I/O for 2 seconds... 
00:27:46.569 [2024-11-19 10:55:34.022076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.569 [2024-11-19 10:55:34.022186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.569 [2024-11-19 10:55:34.022226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.569 [2024-11-19 10:55:34.027975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.569 [2024-11-19 10:55:34.028076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.569 [2024-11-19 10:55:34.028107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.569 [2024-11-19 10:55:34.033275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.569 [2024-11-19 10:55:34.033378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.569 [2024-11-19 10:55:34.033408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.569 [2024-11-19 10:55:34.038613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.569 [2024-11-19 10:55:34.038703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.569 [2024-11-19 10:55:34.038735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.569 [2024-11-19 10:55:34.043827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.569 [2024-11-19 10:55:34.043906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.569 [2024-11-19 10:55:34.043934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.569 [2024-11-19 10:55:34.049050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.049137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.049165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.054107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.054196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.054224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.059501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.059586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.059614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.064710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.064795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.064831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.070019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.070105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.070133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.075300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.075388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.075416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.080507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.080593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.080621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.085700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.085784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.085812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.091441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.091513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.096720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.096794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.096821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.101930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.102018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.102045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.107073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.107142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.107170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.112318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.112405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.117708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.117784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.117811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.123352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.123427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.123455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.128894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.128967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.128994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.133966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.134040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.134067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.139079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.139153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.139181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.144203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.144281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.570 [2024-11-19 10:55:34.144316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.570 [2024-11-19 10:55:34.149354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.570 [2024-11-19 10:55:34.149434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.149461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.154450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.154523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.154550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.159621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.159701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.159729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.164841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.164911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.164938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.169941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.170013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.170040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.175021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.175105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.175132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.180131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.180202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.180229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.571 [2024-11-19 10:55:34.185271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.571 [2024-11-19 10:55:34.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.571 [2024-11-19 10:55:34.185396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.190351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.190435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.195318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.195391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.195418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.200568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.200656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.200690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.205826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.205897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.210881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.210963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.210990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.216034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.216117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.216144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.221316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.221395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.221422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.226375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.226455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.226482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.232093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.232167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.232194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.237416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.237506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.237533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.242453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.242535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.242562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.247459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.247565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.252449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.252548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.252577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.257592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.257675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.257702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.831 [2024-11-19 10:55:34.262978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.831 [2024-11-19 10:55:34.263049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.831 [2024-11-19 10:55:34.263076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.268722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.268804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.268831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.273750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.273831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.273859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.278695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.278766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.278793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.283665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.283749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.283776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.288851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.288932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.293738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.293817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.293845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.298927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.299053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.299081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.304785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.304915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.304943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.311172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.311396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.311425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.317730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.317924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.317953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.325380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.325554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.332568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.332758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.332787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.339705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.339904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.339933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.346868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.347014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.354433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.354651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.354681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.361893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.362061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.362090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.369218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.369406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.369435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.375581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.375787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.375816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.382018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.832 [2024-11-19 10:55:34.382223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.832 [2024-11-19 10:55:34.382252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.832 [2024-11-19 10:55:34.388386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.388593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.394881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.395077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.395106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.401910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.401983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.402011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.408050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.408129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.408157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.414082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.414153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.414180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.418989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.419066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.419094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.424036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.424110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.424136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.429041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.429195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.429224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.434523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.434683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.434713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.440951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.441137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.441166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.833 [2024-11-19 10:55:34.447402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:46.833 [2024-11-19 10:55:34.447505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.833 [2024-11-19 10:55:34.447533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.094 [2024-11-19 10:55:34.453266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.094 [2024-11-19 10:55:34.453357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.094 [2024-11-19 10:55:34.453385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.094 [2024-11-19 10:55:34.458640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.094 [2024-11-19 10:55:34.458822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.094 [2024-11-19 10:55:34.458852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.094 [2024-11-19 10:55:34.464892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.094 [2024-11-19 10:55:34.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.094 [2024-11-19 10:55:34.465113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.094 [2024-11-19 10:55:34.471060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.094 [2024-11-19 10:55:34.471258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.094 [2024-11-19 10:55:34.471287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.094 [2024-11-19 10:55:34.477394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.094 [2024-11-19 10:55:34.477577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.094 [2024-11-19 10:55:34.477606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.483896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.483997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.484026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.490444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.490574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.490603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.497083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.497187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.497215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.503527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.503725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.503754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.509727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.509922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.509956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.515986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.516176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.516205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.522401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.522586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.522615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.528736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.094 [2024-11-19 10:55:34.528841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.094 [2024-11-19 10:55:34.528868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.094 [2024-11-19 10:55:34.534667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.534784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.534814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.539630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.539715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.539742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.544890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.545010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.545038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.549915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 
00:27:47.095 [2024-11-19 10:55:34.550021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.550049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.555107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.555212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.555240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.560202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.560317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.560346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.565293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.565431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.565460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.570495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.570576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.570603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.575446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.575534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.575561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.581373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.581545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.581573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.587666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.587814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.587843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.594300] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.594437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.594466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.601221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.601422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.601452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.607830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.608029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.608058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.095 [2024-11-19 10:55:34.613812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.613952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.613981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:27:47.095 [2024-11-19 10:55:34.620585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.095 [2024-11-19 10:55:34.620736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.095 [2024-11-19 10:55:34.620765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.626934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.627063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.627092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.632374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.632443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.632470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.637718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.637838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.637868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.643359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.643433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.643462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.649073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.649204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.649233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.654664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.654796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.654825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.660134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.660223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.660259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.665624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.665787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.665817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.671275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.671374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.671403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.676768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.676875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.676905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.682103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.682260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.096 [2024-11-19 10:55:34.682289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.688111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.688281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.688317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.694411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.694539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.694568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.701461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.701674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.701708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.707261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.707373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.096 [2024-11-19 10:55:34.707401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.096 [2024-11-19 10:55:34.712806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.096 [2024-11-19 10:55:34.712917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.097 [2024-11-19 10:55:34.712945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.357 [2024-11-19 10:55:34.718619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.357 [2024-11-19 10:55:34.718770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-19 10:55:34.718799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.357 [2024-11-19 10:55:34.724380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.357 [2024-11-19 10:55:34.724516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-19 10:55:34.724545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.357 [2024-11-19 10:55:34.729815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.357 [2024-11-19 10:55:34.729901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-19 10:55:34.729930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.357 [2024-11-19 10:55:34.735581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.357 [2024-11-19 10:55:34.735734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.357 [2024-11-19 10:55:34.735764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.741909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.742059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.742088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.747729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.747902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.747931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.754178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.754379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.754408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.759946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.760063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.760092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.765254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.765344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.765371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.770658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.770746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.770774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.775750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with 
pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.775864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.781443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.781564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.781592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.788295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.788499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.788529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.794403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.794473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.800771] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.800845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.800872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.806432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.806505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.806532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.812049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.812122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.812156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.817116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.817206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.817234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 
10:55:34.821865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.821967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.821996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.827550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.827713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.827741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.833384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.833580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.833608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.839166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.839356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.839385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.845889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.358 [2024-11-19 10:55:34.846061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.358 [2024-11-19 10:55:34.846090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.358 [2024-11-19 10:55:34.851475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.851566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.851595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.856236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.856330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.856359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.860845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.860935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.860963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.866043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.866149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.866177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.872040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.872227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.872255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.878385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.878599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.878628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.885077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.885228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.885257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.891387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.891461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.891492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.897251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.897355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.903391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.903517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.909673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.359 [2024-11-19 10:55:34.909773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.915815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.915893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.921834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.921929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.921958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.927824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.927911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.927939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.933740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.933821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.933849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.938485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.938574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.938603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.943132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.943226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.943254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.947865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.947959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.947988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.952589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.952671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.952699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.957283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.957373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.359 [2024-11-19 10:55:34.957408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.359 [2024-11-19 10:55:34.961971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.359 [2024-11-19 10:55:34.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.360 [2024-11-19 10:55:34.962109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.360 [2024-11-19 10:55:34.966626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.360 [2024-11-19 10:55:34.966700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.360 [2024-11-19 10:55:34.966732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.360 [2024-11-19 10:55:34.971317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 
00:27:47.360 [2024-11-19 10:55:34.971405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.360 [2024-11-19 10:55:34.971434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.360 [2024-11-19 10:55:34.976086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.360 [2024-11-19 10:55:34.976165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.360 [2024-11-19 10:55:34.976194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:34.980815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:34.980894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:34.980920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:34.985495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:34.985575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:34.985603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:34.990162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:34.990250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:34.990278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:34.995199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:34.995299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:34.995334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:35.000805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:35.001007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:35.001036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:35.006637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:35.006768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:35.006797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.619 [2024-11-19 10:55:35.013046] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.619 [2024-11-19 10:55:35.013212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.619 [2024-11-19 10:55:35.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.619 5436.00 IOPS, 679.50 MiB/s [2024-11-19T09:55:35.243Z] [2024-11-19 10:55:35.021048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.021122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.021154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.026895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.027020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.027047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.031964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.032071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.037042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.037176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.037205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.042064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.042157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.042185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.047130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.047229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.047258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.052386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.052466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.052495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.057546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.057654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.057682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.062747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.062893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.062923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.067959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.068079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.068108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.073154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.073274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.620 [2024-11-19 10:55:35.073309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.078157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.078284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.078320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.083965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.084142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.084170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.090658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.090831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.090860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.097684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.097881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.097920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.104693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.104764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.104791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.112071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.112176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.112205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.118592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.118782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.118810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.124565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.124646] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.124676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.130516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.620 [2024-11-19 10:55:35.130702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.620 [2024-11-19 10:55:35.130732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:47.620 [2024-11-19 10:55:35.136650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.621 [2024-11-19 10:55:35.136788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.621 [2024-11-19 10:55:35.136818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:47.621 [2024-11-19 10:55:35.141836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.621 [2024-11-19 10:55:35.141925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.621 [2024-11-19 10:55:35.141954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:47.621 [2024-11-19 10:55:35.146789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:47.621 [2024-11-19 10:55:35.146893] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.146922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.151954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.152056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.152085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.157035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.157138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.157166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.162490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.162695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.162724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.168811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.168979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.169008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.174745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.174906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.181500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.181573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.181600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.187139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.187209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.192737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.192811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.192839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.198433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.198516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.198545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.203993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.204063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.204090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.209750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.209825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.209852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.215349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.215422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.215448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.220937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.221016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.221045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.226710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.226787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.226816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.232398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.232469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.232496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.621 [2024-11-19 10:55:35.238067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.621 [2024-11-19 10:55:35.238138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.621 [2024-11-19 10:55:35.238165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.243697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.243776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.243805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.249219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.249291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.249334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.255393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.255521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.255549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.260675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.260756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.260785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.265838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.265910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.265937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.270955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.271031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.271059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.276093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.276175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.276204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.281318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.281400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.281427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.286423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.286517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.286543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.291426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.291501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.296370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.296461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.296488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.301429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.301525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.301553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.306429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.306506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.306532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.311489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.311587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.311616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.316587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.316675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.316703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.321701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.321788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.321816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.326866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.326953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.326982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.882 [2024-11-19 10:55:35.331935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.882 [2024-11-19 10:55:35.332021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.882 [2024-11-19 10:55:35.332050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.336964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.337051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.337079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.341909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.341998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.342026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.346864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.346936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.346963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.352047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.352117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.352145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.357920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.357998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.358027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.363414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.363486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.363518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.369181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.369259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.369287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.374493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.374581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.374609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.380226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.380316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.380345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.386221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.386359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.386395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.391554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.391641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.391669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.396933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.397057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.397085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.402061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.402154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.402183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.408376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.408498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.408527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.414706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.414850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.414879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.421144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.421262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.421291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.427593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.427700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.427729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.434243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.434348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.434377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.440815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.440944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.440973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.883 [2024-11-19 10:55:35.447051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.883 [2024-11-19 10:55:35.447179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.883 [2024-11-19 10:55:35.447208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.454055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.454249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.454278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.460782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.460850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.460877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.467458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.467540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.467569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.472848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.472936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.472966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.478206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.478334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.483789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.483866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.483895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.489323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.489401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.489430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.494758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.494827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.494853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:47.884 [2024-11-19 10:55:35.500519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:47.884 [2024-11-19 10:55:35.500651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.884 [2024-11-19 10:55:35.500681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.505792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.505874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.505903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.510765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.510847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.510875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.515847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.515928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.515957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.521254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.521397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.521426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.526392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.526477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.526506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.531433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.531534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.531562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.536986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.537129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.541989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.542122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.542150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.547129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.547238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.547267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.552143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.552243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.144 [2024-11-19 10:55:35.552272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:48.144 [2024-11-19 10:55:35.557058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8
00:27:48.144 [2024-11-19 10:55:35.557186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.144 [2024-11-19 10:55:35.557214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.144 [2024-11-19 10:55:35.562067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.144 [2024-11-19 10:55:35.562203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.144 [2024-11-19 10:55:35.562232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.144 [2024-11-19 10:55:35.567192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.144 [2024-11-19 10:55:35.567298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.144 [2024-11-19 10:55:35.567334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.144 [2024-11-19 10:55:35.572447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.144 [2024-11-19 10:55:35.572535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.572564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.577536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.577760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.577789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.583918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.584120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.584149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.589745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.589841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.596259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.596343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.596370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.602184] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.602294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.602330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.607843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.607974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.608002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.612964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.613051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.613080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.618879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.618962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.618991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:27:48.145 [2024-11-19 10:55:35.624339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.624412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.624444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.629954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.630025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.635031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.635116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.635143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.640929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.641022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.641050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.646255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.646332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.646360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.651357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.651438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.651466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.656523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.656607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.656635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.661807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.661886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.661913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.667013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.667094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.667123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.672217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.672319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.672348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.677133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.677231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.145 [2024-11-19 10:55:35.677265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.145 [2024-11-19 10:55:35.682173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.145 [2024-11-19 10:55:35.682259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:48.145 [2024-11-19 10:55:35.682288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.687491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.687576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.687604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.693205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.693313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.693341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.699780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.699897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.699927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.706208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.706321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.711483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.711632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.711661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.716490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.716637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.716666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.721589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.721735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.721763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.726716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.726839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.726867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.732144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.732236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.732264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.737436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.737540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.737569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.742733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.742830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.742859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.747846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 
00:27:48.146 [2024-11-19 10:55:35.747960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.747989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.753131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.753337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.753371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.759465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.759674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.759704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.146 [2024-11-19 10:55:35.764678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.146 [2024-11-19 10:55:35.764799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.146 [2024-11-19 10:55:35.764827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.769651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.769759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.769788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.774644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.774767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.774796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.779787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.779887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.785028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.785173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.785201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.790095] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.790190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.790216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.795407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.795591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.795621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.801617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.801814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.801843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.808373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.808552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.808580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:27:48.406 [2024-11-19 10:55:35.814896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.814989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.815017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.820132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.820238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.820271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.825644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.406 [2024-11-19 10:55:35.825802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.406 [2024-11-19 10:55:35.825831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.406 [2024-11-19 10:55:35.831042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.831130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.831159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.836026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.836130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.836158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.842048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.842222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.842251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.848426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.848556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.855459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.855577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.855616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.861881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.861991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.862021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.868413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.868590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.868618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.874738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.874926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.874955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.880198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.880341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:48.407 [2024-11-19 10:55:35.880381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.885289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.885430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.885458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.891770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.891882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.891911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.897491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.897563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.897590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.902465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.902536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.902563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.907616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.907716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.907745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.912492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.912577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.912606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.917558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.917650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.917678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.923733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.923926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.923955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.929627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.929732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.929760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.936759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.936931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.936959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.943104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.407 [2024-11-19 10:55:35.943237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.407 [2024-11-19 10:55:35.943265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.407 [2024-11-19 10:55:35.948479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 
00:27:48.407 [2024-11-19 10:55:35.948613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.948642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.953615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.953706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.953734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.958654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.958770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.958798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.963666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.963798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.963827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.968622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.968707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.968741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.973549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.973662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.973691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.979346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.979512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.979541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.985921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.986028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.986057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.992136] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.992205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.992231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:35.997180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:35.997252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:35.997279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:36.002067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:36.002209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:36.002238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:36.006985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:36.007104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:36.007133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 
dnr:0 00:27:48.408 [2024-11-19 10:55:36.011857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:36.011945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:36.011973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:48.408 [2024-11-19 10:55:36.016562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cf9560) with pdu=0x2000166ff3c8 00:27:48.408 [2024-11-19 10:55:36.018151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.408 [2024-11-19 10:55:36.018181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:48.408 5506.00 IOPS, 688.25 MiB/s 00:27:48.408 Latency(us) 00:27:48.408 [2024-11-19T09:55:36.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.408 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:48.408 nvme0n1 : 2.00 5505.11 688.14 0.00 0.00 2899.48 1784.04 13689.74 00:27:48.408 [2024-11-19T09:55:36.031Z] =================================================================================================================== 00:27:48.408 [2024-11-19T09:55:36.031Z] Total : 5505.11 688.14 0.00 0.00 2899.48 1784.04 13689.74 00:27:48.408 { 00:27:48.408 "results": [ 00:27:48.408 { 00:27:48.408 "job": "nvme0n1", 00:27:48.408 "core_mask": "0x2", 00:27:48.408 "workload": "randwrite", 00:27:48.408 "status": "finished", 00:27:48.408 "queue_depth": 16, 00:27:48.408 "io_size": 131072, 00:27:48.408 "runtime": 2.003957, 00:27:48.408 "iops": 5505.10814353801, 00:27:48.408 "mibps": 688.1385179422513, 
00:27:48.408 "io_failed": 0, 00:27:48.408 "io_timeout": 0, 00:27:48.408 "avg_latency_us": 2899.481316036849, 00:27:48.408 "min_latency_us": 1784.0355555555554, 00:27:48.408 "max_latency_us": 13689.742222222223 00:27:48.408 } 00:27:48.408 ], 00:27:48.408 "core_count": 1 00:27:48.408 } 00:27:48.666 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:48.666 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:48.666 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:48.666 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:48.666 | .driver_specific 00:27:48.666 | .nvme_error 00:27:48.666 | .status_code 00:27:48.666 | .command_transient_transport_error' 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1450646 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1450646 ']' 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1450646 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450646 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450646' 00:27:48.925 killing process with pid 1450646 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1450646 00:27:48.925 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.925 00:27:48.925 Latency(us) 00:27:48.925 [2024-11-19T09:55:36.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.925 [2024-11-19T09:55:36.548Z] =================================================================================================================== 00:27:48.925 [2024-11-19T09:55:36.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.925 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1450646 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1449267 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1449267 ']' 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1449267 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449267 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.184 10:55:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449267' 00:27:49.184 killing process with pid 1449267 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1449267 00:27:49.184 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1449267 00:27:49.444 00:27:49.444 real 0m15.317s 00:27:49.444 user 0m30.761s 00:27:49.444 sys 0m4.219s 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.444 ************************************ 00:27:49.444 END TEST nvmf_digest_error 00:27:49.444 ************************************ 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.444 rmmod nvme_tcp 00:27:49.444 rmmod nvme_fabrics 00:27:49.444 rmmod nvme_keyring 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1449267 ']' 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1449267 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1449267 ']' 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1449267 00:27:49.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1449267) - No such process 00:27:49.444 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1449267 is not found' 00:27:49.445 Process with pid 1449267 is not found 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.445 10:55:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.352 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.352 00:27:51.352 real 0m35.841s 00:27:51.352 user 1m2.822s 00:27:51.352 sys 0m10.528s 00:27:51.352 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.352 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.352 ************************************ 00:27:51.352 END TEST nvmf_digest 00:27:51.352 ************************************ 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.610 10:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.610 ************************************ 00:27:51.610 START TEST nvmf_bdevperf 00:27:51.610 ************************************ 00:27:51.610 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:51.610 * Looking for test storage... 
00:27:51.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.610 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:51.610 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:51.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.611 --rc genhtml_branch_coverage=1 00:27:51.611 --rc genhtml_function_coverage=1 00:27:51.611 --rc genhtml_legend=1 00:27:51.611 --rc geninfo_all_blocks=1 00:27:51.611 --rc geninfo_unexecuted_blocks=1 00:27:51.611 00:27:51.611 ' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:27:51.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.611 --rc genhtml_branch_coverage=1 00:27:51.611 --rc genhtml_function_coverage=1 00:27:51.611 --rc genhtml_legend=1 00:27:51.611 --rc geninfo_all_blocks=1 00:27:51.611 --rc geninfo_unexecuted_blocks=1 00:27:51.611 00:27:51.611 ' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:51.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.611 --rc genhtml_branch_coverage=1 00:27:51.611 --rc genhtml_function_coverage=1 00:27:51.611 --rc genhtml_legend=1 00:27:51.611 --rc geninfo_all_blocks=1 00:27:51.611 --rc geninfo_unexecuted_blocks=1 00:27:51.611 00:27:51.611 ' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:51.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.611 --rc genhtml_branch_coverage=1 00:27:51.611 --rc genhtml_function_coverage=1 00:27:51.611 --rc genhtml_legend=1 00:27:51.611 --rc geninfo_all_blocks=1 00:27:51.611 --rc geninfo_unexecuted_blocks=1 00:27:51.611 00:27:51.611 ' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.611 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:51.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.612 10:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.146 10:55:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:54.146 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.146 
10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:54.146 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.146 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:54.147 Found net devices under 0000:09:00.0: cvl_0_0 00:27:54.147 10:55:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:54.147 Found net devices under 0000:09:00.1: cvl_0_1 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:27:54.147 00:27:54.147 --- 10.0.0.2 ping statistics --- 00:27:54.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.147 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:27:54.147 00:27:54.147 --- 10.0.0.1 ping statistics --- 00:27:54.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.147 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1452999 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1452999 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1452999 ']' 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
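The nvmf_tcp_init steps above isolate the target port (cvl_0_0) in its own network namespace while the initiator port (cvl_0_1) stays in the root namespace, so initiator/target traffic genuinely crosses the physical link, as the two ping checks confirm. A condensed sketch of that wiring; `run` is a printing stand-in here because the real commands need root and the physical NICs:

```shell
# Condensed sketch of the nvmf_tcp_init namespace wiring from the log.
# run() only prints each command; drop it (and run as root) to apply.
run() { echo "+ $*"; }

ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"             # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator
```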
00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.147 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.147 [2024-11-19 10:55:41.408921] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:27:54.147 [2024-11-19 10:55:41.409013] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.147 [2024-11-19 10:55:41.484354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:54.147 [2024-11-19 10:55:41.544777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.147 [2024-11-19 10:55:41.544841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.147 [2024-11-19 10:55:41.544855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.147 [2024-11-19 10:55:41.544866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.147 [2024-11-19 10:55:41.544876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:54.147 [2024-11-19 10:55:41.546394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.147 [2024-11-19 10:55:41.546459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.147 [2024-11-19 10:55:41.546463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 [2024-11-19 10:55:41.697025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 Malloc0 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.148 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.148 [2024-11-19 10:55:41.766683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:54.406 
10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.406 { 00:27:54.406 "params": { 00:27:54.406 "name": "Nvme$subsystem", 00:27:54.406 "trtype": "$TEST_TRANSPORT", 00:27:54.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.406 "adrfam": "ipv4", 00:27:54.406 "trsvcid": "$NVMF_PORT", 00:27:54.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.406 "hdgst": ${hdgst:-false}, 00:27:54.406 "ddgst": ${ddgst:-false} 00:27:54.406 }, 00:27:54.406 "method": "bdev_nvme_attach_controller" 00:27:54.406 } 00:27:54.406 EOF 00:27:54.406 )") 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:54.406 10:55:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:54.406 "params": { 00:27:54.406 "name": "Nvme1", 00:27:54.406 "trtype": "tcp", 00:27:54.406 "traddr": "10.0.0.2", 00:27:54.406 "adrfam": "ipv4", 00:27:54.406 "trsvcid": "4420", 00:27:54.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.406 "hdgst": false, 00:27:54.406 "ddgst": false 00:27:54.406 }, 00:27:54.406 "method": "bdev_nvme_attach_controller" 00:27:54.406 }' 00:27:54.406 [2024-11-19 10:55:41.820219] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
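The tgt_init rpc_cmd sequence a few lines back (transport, Malloc0 backing bdev, subsystem, namespace, listener) maps one-to-one onto SPDK's scripts/rpc.py. A sketch with the exact arguments from the log; `rpc` is a printing stand-in since no target is running here — against a live nvmf_tgt you would invoke scripts/rpc.py directly:

```shell
# The tgt_init RPC sequence from host/bdevperf.sh, as rpc.py invocations.
# rpc() only prints; replace with scripts/rpc.py against a running target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```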
00:27:54.406 [2024-11-19 10:55:41.820300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453141 ] 00:27:54.406 [2024-11-19 10:55:41.890268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.406 [2024-11-19 10:55:41.952555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.664 Running I/O for 1 seconds... 00:27:55.856 8475.00 IOPS, 33.11 MiB/s 00:27:55.856 Latency(us) 00:27:55.856 [2024-11-19T09:55:43.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:55.856 Verification LBA range: start 0x0 length 0x4000 00:27:55.856 Nvme1n1 : 1.05 8226.61 32.14 0.00 0.00 14896.10 3155.44 44273.21 00:27:55.856 [2024-11-19T09:55:43.479Z] =================================================================================================================== 00:27:55.856 [2024-11-19T09:55:43.479Z] Total : 8226.61 32.14 0.00 0.00 14896.10 3155.44 44273.21 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1453286 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:55.856 { 00:27:55.856 "params": { 00:27:55.856 "name": "Nvme$subsystem", 00:27:55.856 "trtype": "$TEST_TRANSPORT", 00:27:55.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.856 "adrfam": "ipv4", 00:27:55.856 "trsvcid": "$NVMF_PORT", 00:27:55.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.856 "hdgst": ${hdgst:-false}, 00:27:55.856 "ddgst": ${ddgst:-false} 00:27:55.856 }, 00:27:55.856 "method": "bdev_nvme_attach_controller" 00:27:55.856 } 00:27:55.856 EOF 00:27:55.856 )") 00:27:55.856 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:56.114 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:56.114 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:56.114 10:55:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:56.114 "params": { 00:27:56.114 "name": "Nvme1", 00:27:56.114 "trtype": "tcp", 00:27:56.114 "traddr": "10.0.0.2", 00:27:56.114 "adrfam": "ipv4", 00:27:56.114 "trsvcid": "4420", 00:27:56.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:56.114 "hdgst": false, 00:27:56.114 "ddgst": false 00:27:56.114 }, 00:27:56.114 "method": "bdev_nvme_attach_controller" 00:27:56.114 }' 00:27:56.114 [2024-11-19 10:55:43.520674] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
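The MiB/s column in these bdevperf summaries follows directly from IOPS times the 4096-byte I/O size set by `-o 4096`; for example, the one-second run's 8226.61 IOPS works out to the reported 32.14 MiB/s:

```shell
# Cross-check of the bdevperf summary line: 8226.61 IOPS at 4 KiB per I/O.
awk 'BEGIN { printf "%.2f MiB/s\n", 8226.61 * 4096 / (1024 * 1024) }'
```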
00:27:56.114 [2024-11-19 10:55:43.520749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453286 ] 00:27:56.114 [2024-11-19 10:55:43.588391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.114 [2024-11-19 10:55:43.647543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.372 Running I/O for 15 seconds... 00:27:58.240 8513.00 IOPS, 33.25 MiB/s [2024-11-19T09:55:46.799Z] 8518.50 IOPS, 33.28 MiB/s [2024-11-19T09:55:46.799Z] 10:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1452999 00:27:59.176 10:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:59.176 [2024-11-19 10:55:46.483763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.483811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.483856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.483882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.483900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.483916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.483933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.483949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.483982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.483997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.484012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.484025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.484068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.484083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.484096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.176 [2024-11-19 10:55:46.484111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.176 [2024-11-19 10:55:46.484125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
00:27:59.176 [2024-11-19 10:55:46.484140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.176 [2024-11-19 10:55:46.484153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for every command still queued on qid:1 at 10:55:46.484-46.487: READ lba:48224-48640 and WRITE lba:48656-49160 (len:8 each, various cid values), every one completed as ABORTED - SQ DELETION (00/08) ...]
00:27:59.180 [2024-11-19 10:55:46.487529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e1ba0 is same with the state(6) to be set
00:27:59.180 [2024-11-19 10:55:46.487550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.180 [2024-11-19 10:55:46.487562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-19 10:55:46.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48648 len:8 PRP1 0x0 PRP2 0x0 00:27:59.180 [2024-11-19 10:55:46.487589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.180 [2024-11-19 10:55:46.490743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.180 [2024-11-19 10:55:46.490825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.180 [2024-11-19 10:55:46.491609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.180 [2024-11-19 10:55:46.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.180 [2024-11-19 10:55:46.491670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.180 [2024-11-19 10:55:46.491938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.180 [2024-11-19 10:55:46.492131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.180 [2024-11-19 10:55:46.492150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.180 [2024-11-19 10:55:46.492165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.180 [2024-11-19 10:55:46.492178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.180 [2024-11-19 10:55:46.504240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.180 [2024-11-19 10:55:46.504649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.180 [2024-11-19 10:55:46.504677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.180 [2024-11-19 10:55:46.504692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.180 [2024-11-19 10:55:46.504933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.180 [2024-11-19 10:55:46.505125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.180 [2024-11-19 10:55:46.505142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.180 [2024-11-19 10:55:46.505155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.180 [2024-11-19 10:55:46.505166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.180 [2024-11-19 10:55:46.517475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.180 [2024-11-19 10:55:46.517864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.180 [2024-11-19 10:55:46.517907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.180 [2024-11-19 10:55:46.517921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.180 [2024-11-19 10:55:46.518171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.518426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.518448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.518466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.518479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.530657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.531083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.531110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.531125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.531373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.531615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.531635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.531647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.531659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.543666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.544064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.544093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.544108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.544346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.544561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.544580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.544592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.544604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.556694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.557068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.557110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.557125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.557396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.557623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.557642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.557669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.557681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.569796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.570140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.570169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.570200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.570450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.570679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.570697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.570708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.570720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.582761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.583188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.583230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.583246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.583497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.583711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.583729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.583741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.583753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.595863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.596354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.596396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.596411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.596662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.596869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.596887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.596898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.596910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.608854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.609231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.609277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.609292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.181 [2024-11-19 10:55:46.609539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.181 [2024-11-19 10:55:46.609793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.181 [2024-11-19 10:55:46.609812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.181 [2024-11-19 10:55:46.609823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.181 [2024-11-19 10:55:46.609834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.181 [2024-11-19 10:55:46.622073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.181 [2024-11-19 10:55:46.622527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.181 [2024-11-19 10:55:46.622556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.181 [2024-11-19 10:55:46.622571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.622812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.623020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.623038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.623050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.623061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.635175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.635550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.635594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.635609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.635876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.636068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.636086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.636097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.636108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.648245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.648633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.648675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.648690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.648937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.649129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.649147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.649159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.649171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.661318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.661725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.661753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.661769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.661989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.662197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.662215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.662226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.662238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.674501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.674839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.674866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.674880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.675098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.675332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.675366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.675379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.675390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.687495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.687861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.687904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.687920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.688171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.688405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.688425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.182 [2024-11-19 10:55:46.688442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.182 [2024-11-19 10:55:46.688454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.182 [2024-11-19 10:55:46.700542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.182 [2024-11-19 10:55:46.700920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.182 [2024-11-19 10:55:46.700946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.182 [2024-11-19 10:55:46.700961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.182 [2024-11-19 10:55:46.701160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.182 [2024-11-19 10:55:46.701409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.182 [2024-11-19 10:55:46.701429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.183 [2024-11-19 10:55:46.701441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.183 [2024-11-19 10:55:46.701453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.183 [2024-11-19 10:55:46.713573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.183 [2024-11-19 10:55:46.713896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.183 [2024-11-19 10:55:46.713921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.183 [2024-11-19 10:55:46.713936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.183 [2024-11-19 10:55:46.714171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.183 [2024-11-19 10:55:46.714424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.183 [2024-11-19 10:55:46.714444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.183 [2024-11-19 10:55:46.714456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.183 [2024-11-19 10:55:46.714468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.183 [2024-11-19 10:55:46.726667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.727036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.727063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.727078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.727296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.727538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.727557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.727569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.727581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.183 [2024-11-19 10:55:46.739780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.740162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.740190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.740206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.740429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.740678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.740697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.740709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.740721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.183 [2024-11-19 10:55:46.753514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.753931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.753959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.753974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.754187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.754445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.754466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.754480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.754492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.183 [2024-11-19 10:55:46.766734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.767207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.767259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.767274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.767536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.767751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.767770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.767782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.767793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.183 [2024-11-19 10:55:46.779768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.780131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.780203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.780240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.780508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.780757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.780775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.780787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.780799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.183 [2024-11-19 10:55:46.793423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.183 [2024-11-19 10:55:46.793832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.183 [2024-11-19 10:55:46.793885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.183 [2024-11-19 10:55:46.793900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.183 [2024-11-19 10:55:46.794157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.183 [2024-11-19 10:55:46.794385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.183 [2024-11-19 10:55:46.794407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.183 [2024-11-19 10:55:46.794419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.183 [2024-11-19 10:55:46.794432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.443 [2024-11-19 10:55:46.806559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.443 [2024-11-19 10:55:46.807020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.443 [2024-11-19 10:55:46.807074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.443 [2024-11-19 10:55:46.807090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.443 [2024-11-19 10:55:46.807339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.443 [2024-11-19 10:55:46.807558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.443 [2024-11-19 10:55:46.807578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.443 [2024-11-19 10:55:46.807591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.443 [2024-11-19 10:55:46.807604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.443 [2024-11-19 10:55:46.819601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.443 [2024-11-19 10:55:46.820010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.443 [2024-11-19 10:55:46.820062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.443 [2024-11-19 10:55:46.820077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.443 [2024-11-19 10:55:46.820371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.443 [2024-11-19 10:55:46.820583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.820618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.820630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.820642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.832567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.832926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.832953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.832967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.833181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.833431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.833451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.833464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.833476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.845579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.845913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.845940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.845955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.846177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.846429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.846449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.846462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.846474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 7467.33 IOPS, 29.17 MiB/s [2024-11-19T09:55:47.067Z]
00:27:59.444 [2024-11-19 10:55:46.858647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.859136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.859178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.859194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.859475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.859710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.859733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.859745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.859757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.871805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.872183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.872226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.872241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.872493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.872738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.872756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.872768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.872779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.884920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.885346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.885387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.885403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.885641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.885847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.885866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.885877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.885888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.898130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.898520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.898548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.898563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.898804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.899012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.899030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.899042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.899054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.911381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.911820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.911861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.911877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.912116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.444 [2024-11-19 10:55:46.912342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.444 [2024-11-19 10:55:46.912362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.444 [2024-11-19 10:55:46.912376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.444 [2024-11-19 10:55:46.912388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.444 [2024-11-19 10:55:46.924718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.444 [2024-11-19 10:55:46.925019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.444 [2024-11-19 10:55:46.925058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.444 [2024-11-19 10:55:46.925074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.444 [2024-11-19 10:55:46.925288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.925515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.925535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.925547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.925559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:46.937948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:46.938318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:46.938347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:46.938362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:46.938602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.938794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.938812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.938823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.938835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:46.950994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:46.951483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:46.951529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:46.951546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:46.951811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.952003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.952022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.952033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.952044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:46.964093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:46.964511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:46.964556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:46.964572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:46.964828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.965020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.965037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.965050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.965061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:46.977392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:46.977757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:46.977784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:46.977799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:46.978020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.978227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.978244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.978256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.978268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:46.990521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:46.990887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:46.990914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:46.990929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:46.991155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:46.991436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:46.991458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:46.991471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:46.991484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:47.003681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:47.004044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:47.004071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:47.004086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:47.004314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:47.004530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:47.004549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:47.004560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:47.004572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:47.016820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:47.017152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:47.017179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:47.017193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:47.017443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.445 [2024-11-19 10:55:47.017670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.445 [2024-11-19 10:55:47.017689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.445 [2024-11-19 10:55:47.017702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.445 [2024-11-19 10:55:47.017714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.445 [2024-11-19 10:55:47.029910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.445 [2024-11-19 10:55:47.030335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.445 [2024-11-19 10:55:47.030379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.445 [2024-11-19 10:55:47.030394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.445 [2024-11-19 10:55:47.030657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.446 [2024-11-19 10:55:47.030848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.446 [2024-11-19 10:55:47.030871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.446 [2024-11-19 10:55:47.030884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.446 [2024-11-19 10:55:47.030895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.446 [2024-11-19 10:55:47.043016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.446 [2024-11-19 10:55:47.043384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.446 [2024-11-19 10:55:47.043426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.446 [2024-11-19 10:55:47.043442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.446 [2024-11-19 10:55:47.043689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.446 [2024-11-19 10:55:47.043897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.446 [2024-11-19 10:55:47.043914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.446 [2024-11-19 10:55:47.043926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.446 [2024-11-19 10:55:47.043938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.446 [2024-11-19 10:55:47.056041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.446 [2024-11-19 10:55:47.056531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.446 [2024-11-19 10:55:47.056574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.446 [2024-11-19 10:55:47.056590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.446 [2024-11-19 10:55:47.056839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.446 [2024-11-19 10:55:47.057046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.446 [2024-11-19 10:55:47.057064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.446 [2024-11-19 10:55:47.057075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.446 [2024-11-19 10:55:47.057087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.705 [2024-11-19 10:55:47.069240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.705 [2024-11-19 10:55:47.069605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.705 [2024-11-19 10:55:47.069648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.705 [2024-11-19 10:55:47.069662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.705 [2024-11-19 10:55:47.069915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.705 [2024-11-19 10:55:47.070133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.705 [2024-11-19 10:55:47.070153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.705 [2024-11-19 10:55:47.070167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.705 [2024-11-19 10:55:47.070180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.705 [2024-11-19 10:55:47.082210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.705 [2024-11-19 10:55:47.082569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.705 [2024-11-19 10:55:47.082597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.705 [2024-11-19 10:55:47.082613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.705 [2024-11-19 10:55:47.082848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.705 [2024-11-19 10:55:47.083054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.705 [2024-11-19 10:55:47.083073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.705 [2024-11-19 10:55:47.083085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.705 [2024-11-19 10:55:47.083096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.705 [2024-11-19 10:55:47.095277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.705 [2024-11-19 10:55:47.095708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.705 [2024-11-19 10:55:47.095735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.705 [2024-11-19 10:55:47.095751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.705 [2024-11-19 10:55:47.095984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.705 [2024-11-19 10:55:47.096191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.705 [2024-11-19 10:55:47.096209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.705 [2024-11-19 10:55:47.096221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.705 [2024-11-19 10:55:47.096233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.705 [2024-11-19 10:55:47.108317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.705 [2024-11-19 10:55:47.108681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.705 [2024-11-19 10:55:47.108723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.705 [2024-11-19 10:55:47.108739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.705 [2024-11-19 10:55:47.108991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.705 [2024-11-19 10:55:47.109198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.705 [2024-11-19 10:55:47.109216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.705 [2024-11-19 10:55:47.109228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.109239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.121526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.121943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.121989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.122006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.122226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.122466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.122485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.122498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.122509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.134747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.135109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.135156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.135171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.135446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.135645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.135678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.135691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.135702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.148313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.148731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.148774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.148789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.149045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.149278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.149322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.149337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.149364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.161587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.161949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.161992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.162008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.162280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.162502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.162522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.162534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.162546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.174829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.175195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.175238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.175253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.175515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.175727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.175745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.175757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.175768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.188043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.188376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.188404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.188419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.188640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.188850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.188876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.188887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.188898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.201310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.201749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.201803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.706 [2024-11-19 10:55:47.201817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.706 [2024-11-19 10:55:47.202028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.706 [2024-11-19 10:55:47.202221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.706 [2024-11-19 10:55:47.202244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.706 [2024-11-19 10:55:47.202256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.706 [2024-11-19 10:55:47.202268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.706 [2024-11-19 10:55:47.214482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.706 [2024-11-19 10:55:47.214866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.706 [2024-11-19 10:55:47.214908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.214923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.215174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.215410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.215430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.215442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.215454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.227944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.228370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.228399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.228415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.228644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.228874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.228893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.228905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.228917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.241311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.241726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.241741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.241962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.242194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.242214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.242227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.242239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.254616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.255092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.255144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.255159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.255439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.255681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.255699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.255711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.255723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.267916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.268288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.268324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.268341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.268568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.268785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.268804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.268816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.268828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.281202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.281544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.281572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.281587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.281802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.282016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.282035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.282046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.282058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.294312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.294787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.294843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.294858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.295124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.295349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.707 [2024-11-19 10:55:47.295382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.707 [2024-11-19 10:55:47.295395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.707 [2024-11-19 10:55:47.295407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.707 [2024-11-19 10:55:47.307486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.707 [2024-11-19 10:55:47.307886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.707 [2024-11-19 10:55:47.307927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.707 [2024-11-19 10:55:47.307942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.707 [2024-11-19 10:55:47.308189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.707 [2024-11-19 10:55:47.308411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.708 [2024-11-19 10:55:47.308431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.708 [2024-11-19 10:55:47.308443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.708 [2024-11-19 10:55:47.308455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.708 [2024-11-19 10:55:47.320647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.708 [2024-11-19 10:55:47.320979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.708 [2024-11-19 10:55:47.321007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.708 [2024-11-19 10:55:47.321022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.708 [2024-11-19 10:55:47.321264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.708 [2024-11-19 10:55:47.321526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.708 [2024-11-19 10:55:47.321547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.708 [2024-11-19 10:55:47.321559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.708 [2024-11-19 10:55:47.321571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.966 [2024-11-19 10:55:47.334115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.966 [2024-11-19 10:55:47.334550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-11-19 10:55:47.334592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.966 [2024-11-19 10:55:47.334608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.966 [2024-11-19 10:55:47.334854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.966 [2024-11-19 10:55:47.335051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.966 [2024-11-19 10:55:47.335070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.967 [2024-11-19 10:55:47.335082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.967 [2024-11-19 10:55:47.335094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.967 [2024-11-19 10:55:47.347446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:59.967 [2024-11-19 10:55:47.347837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-11-19 10:55:47.347878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:27:59.967 [2024-11-19 10:55:47.347893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:27:59.967 [2024-11-19 10:55:47.348140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:27:59.967 [2024-11-19 10:55:47.348382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:59.967 [2024-11-19 10:55:47.348403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:59.967 [2024-11-19 10:55:47.348416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:59.967 [2024-11-19 10:55:47.348429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:59.967 [2024-11-19 10:55:47.360783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.967 [2024-11-19 10:55:47.361148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-11-19 10:55:47.361191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.967 [2024-11-19 10:55:47.361206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.967 [2024-11-19 10:55:47.361470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.967 [2024-11-19 10:55:47.361681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.967 [2024-11-19 10:55:47.361700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.967 [2024-11-19 10:55:47.361711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.967 [2024-11-19 10:55:47.361722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.967 [2024-11-19 10:55:47.373997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.967 [2024-11-19 10:55:47.374329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-11-19 10:55:47.374356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.967 [2024-11-19 10:55:47.374371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.967 [2024-11-19 10:55:47.374594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.967 [2024-11-19 10:55:47.374802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.967 [2024-11-19 10:55:47.374820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.967 [2024-11-19 10:55:47.374837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.967 [2024-11-19 10:55:47.374849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.967 [2024-11-19 10:55:47.387233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.967 [2024-11-19 10:55:47.387696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-11-19 10:55:47.387738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.967 [2024-11-19 10:55:47.387753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.967 [2024-11-19 10:55:47.387989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.967 [2024-11-19 10:55:47.388181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.967 [2024-11-19 10:55:47.388199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.967 [2024-11-19 10:55:47.388211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.967 [2024-11-19 10:55:47.388222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.967 [2024-11-19 10:55:47.400244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.967 [2024-11-19 10:55:47.400633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-11-19 10:55:47.400675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.967 [2024-11-19 10:55:47.400691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.967 [2024-11-19 10:55:47.400946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.967 [2024-11-19 10:55:47.401153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.967 [2024-11-19 10:55:47.401171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.967 [2024-11-19 10:55:47.401183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.967 [2024-11-19 10:55:47.401194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.967 [2024-11-19 10:55:47.413395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.967 [2024-11-19 10:55:47.413816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-11-19 10:55:47.413844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.967 [2024-11-19 10:55:47.413860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.967 [2024-11-19 10:55:47.414090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.967 [2024-11-19 10:55:47.414333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.967 [2024-11-19 10:55:47.414366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.967 [2024-11-19 10:55:47.414379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.967 [2024-11-19 10:55:47.414391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.426567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.426958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.427001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.427017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.427257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.427498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.427518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.427530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.427542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.439581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.439977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.440004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.440020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.440239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.440482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.440502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.440514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.440526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.452675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.453102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.453143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.453159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.453411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.453630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.453649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.453675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.453687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.465712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.466073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.466104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.466120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.466366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.466576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.466610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.466622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.466635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.478863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.479266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.479293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.479335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.479578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.479804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.479822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.479834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.479845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.491871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.492236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.492263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.492278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.492516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.492760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.492779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.492791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.492804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.968 [2024-11-19 10:55:47.505263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.968 [2024-11-19 10:55:47.505625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.968 [2024-11-19 10:55:47.505667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.968 [2024-11-19 10:55:47.505682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.968 [2024-11-19 10:55:47.505920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.968 [2024-11-19 10:55:47.506112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.968 [2024-11-19 10:55:47.506130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.968 [2024-11-19 10:55:47.506142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.968 [2024-11-19 10:55:47.506153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.518374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.518733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.518760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.518775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.519009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.519216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.519235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.519246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.519257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.531489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.531913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.531928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.532147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.532383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.532403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.532415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.532427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.544584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.545009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.545036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.545052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.545286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.545525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.545546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.545567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.545597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.557713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.558200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.558241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.558257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.558506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.558739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.558757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.558769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.558780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.570715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.571204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.571246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.571262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.571501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.571756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.571775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.571787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.571799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:59.969 [2024-11-19 10:55:47.583777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:59.969 [2024-11-19 10:55:47.584218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.969 [2024-11-19 10:55:47.584246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:27:59.969 [2024-11-19 10:55:47.584261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:27:59.969 [2024-11-19 10:55:47.584484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:27:59.969 [2024-11-19 10:55:47.584754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:59.969 [2024-11-19 10:55:47.584773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:59.969 [2024-11-19 10:55:47.584785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:59.969 [2024-11-19 10:55:47.584796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.228 [2024-11-19 10:55:47.596912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.228 [2024-11-19 10:55:47.597277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.228 [2024-11-19 10:55:47.597325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.228 [2024-11-19 10:55:47.597343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.228 [2024-11-19 10:55:47.597612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.228 [2024-11-19 10:55:47.597804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.228 [2024-11-19 10:55:47.597821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.228 [2024-11-19 10:55:47.597833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.228 [2024-11-19 10:55:47.597845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.228 [2024-11-19 10:55:47.610015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.610394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.610420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.610435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.610635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.610858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.610876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.610888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.610900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.623082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.623467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.623494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.623509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.623708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.623932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.623950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.623961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.623972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.636107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.636477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.636509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.636525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.636743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.636949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.636967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.636978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.636989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.649248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.649637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.649680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.649695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.649928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.650135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.650153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.650165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.650177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.662322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.662748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.662789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.662805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.663045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.663252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.663270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.663282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.663293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.675327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.675814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.675856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.675872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.676128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.676361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.676380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.676392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.676404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.688331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.688695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.229 [2024-11-19 10:55:47.688737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.229 [2024-11-19 10:55:47.688752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.229 [2024-11-19 10:55:47.689003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.229 [2024-11-19 10:55:47.689195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.229 [2024-11-19 10:55:47.689212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.229 [2024-11-19 10:55:47.689224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.229 [2024-11-19 10:55:47.689235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.229 [2024-11-19 10:55:47.701402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.229 [2024-11-19 10:55:47.701732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.230 [2024-11-19 10:55:47.701759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.230 [2024-11-19 10:55:47.701773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.230 [2024-11-19 10:55:47.701993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.230 [2024-11-19 10:55:47.702201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.230 [2024-11-19 10:55:47.702219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.230 [2024-11-19 10:55:47.702231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.230 [2024-11-19 10:55:47.702242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.230 [2024-11-19 10:55:47.714472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:00.230 [2024-11-19 10:55:47.714977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.230 [2024-11-19 10:55:47.715003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:00.230 [2024-11-19 10:55:47.715034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:00.230 [2024-11-19 10:55:47.715283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:00.230 [2024-11-19 10:55:47.715509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:00.230 [2024-11-19 10:55:47.715529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:00.230 [2024-11-19 10:55:47.715547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:00.230 [2024-11-19 10:55:47.715559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:00.230 [2024-11-19 10:55:47.727644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.728004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.728031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.728046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.728266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.728506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.230 [2024-11-19 10:55:47.728526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.230 [2024-11-19 10:55:47.728539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.230 [2024-11-19 10:55:47.728551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.230 [2024-11-19 10:55:47.740759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.741126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.741168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.741183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.741436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.741661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.230 [2024-11-19 10:55:47.741693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.230 [2024-11-19 10:55:47.741705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.230 [2024-11-19 10:55:47.741716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.230 [2024-11-19 10:55:47.753867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.754205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.754233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.754248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.754485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.754713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.230 [2024-11-19 10:55:47.754732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.230 [2024-11-19 10:55:47.754745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.230 [2024-11-19 10:55:47.754757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.230 [2024-11-19 10:55:47.767099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.767496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.767523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.767537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.767773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.767980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.230 [2024-11-19 10:55:47.767999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.230 [2024-11-19 10:55:47.768011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.230 [2024-11-19 10:55:47.768022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.230 [2024-11-19 10:55:47.780233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.780617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.780659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.780673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.780901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.781093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.230 [2024-11-19 10:55:47.781111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.230 [2024-11-19 10:55:47.781123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.230 [2024-11-19 10:55:47.781134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.230 [2024-11-19 10:55:47.793267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.230 [2024-11-19 10:55:47.793636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.230 [2024-11-19 10:55:47.793679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.230 [2024-11-19 10:55:47.793694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.230 [2024-11-19 10:55:47.793949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.230 [2024-11-19 10:55:47.794156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.231 [2024-11-19 10:55:47.794173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.231 [2024-11-19 10:55:47.794185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.231 [2024-11-19 10:55:47.794196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.231 [2024-11-19 10:55:47.806359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.231 [2024-11-19 10:55:47.806702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.231 [2024-11-19 10:55:47.806734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.231 [2024-11-19 10:55:47.806750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.231 [2024-11-19 10:55:47.806970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.231 [2024-11-19 10:55:47.807178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.231 [2024-11-19 10:55:47.807196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.231 [2024-11-19 10:55:47.807208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.231 [2024-11-19 10:55:47.807219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.231 [2024-11-19 10:55:47.819388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.231 [2024-11-19 10:55:47.819812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.231 [2024-11-19 10:55:47.819838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.231 [2024-11-19 10:55:47.819853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.231 [2024-11-19 10:55:47.820086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.231 [2024-11-19 10:55:47.820293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.231 [2024-11-19 10:55:47.820336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.231 [2024-11-19 10:55:47.820350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.231 [2024-11-19 10:55:47.820361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.231 [2024-11-19 10:55:47.832584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.231 [2024-11-19 10:55:47.832931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.231 [2024-11-19 10:55:47.832958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.231 [2024-11-19 10:55:47.832973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.231 [2024-11-19 10:55:47.833207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.231 [2024-11-19 10:55:47.833462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.231 [2024-11-19 10:55:47.833483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.231 [2024-11-19 10:55:47.833496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.231 [2024-11-19 10:55:47.833507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.231 [2024-11-19 10:55:47.845845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.231 [2024-11-19 10:55:47.846320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.231 [2024-11-19 10:55:47.846348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.231 [2024-11-19 10:55:47.846363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.231 [2024-11-19 10:55:47.846608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.231 [2024-11-19 10:55:47.846815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.231 [2024-11-19 10:55:47.846833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.231 [2024-11-19 10:55:47.846845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.231 [2024-11-19 10:55:47.846857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.490 5600.50 IOPS, 21.88 MiB/s [2024-11-19T09:55:48.113Z] [2024-11-19 10:55:47.859078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.490 [2024-11-19 10:55:47.859449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-11-19 10:55:47.859477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.490 [2024-11-19 10:55:47.859493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.490 [2024-11-19 10:55:47.859731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.490 [2024-11-19 10:55:47.859938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.490 [2024-11-19 10:55:47.859956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.490 [2024-11-19 10:55:47.859967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.490 [2024-11-19 10:55:47.859979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.490 [2024-11-19 10:55:47.872131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.490 [2024-11-19 10:55:47.872520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-11-19 10:55:47.872547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.490 [2024-11-19 10:55:47.872563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.490 [2024-11-19 10:55:47.872796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.873004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.873022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.873034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.873045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.885176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.885517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.885545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.885560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.885780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.885987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.886010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.886023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.886034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.898252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.898645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.898688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.898703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.898936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.899143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.899161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.899173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.899184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.911416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.911752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.911779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.911794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.912009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.912218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.912236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.912247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.912259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.924637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.925130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.925172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.925188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.925454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.925672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.925690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.925701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.925713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.937769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.938127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.938167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.938405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.938599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.938617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.938628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.938639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.950875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.951299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.951348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.951365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.951603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.951811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.951829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.951841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.951852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.491 [2024-11-19 10:55:47.964003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.491 [2024-11-19 10:55:47.964431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.491 [2024-11-19 10:55:47.964459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.491 [2024-11-19 10:55:47.964474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.491 [2024-11-19 10:55:47.964718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.491 [2024-11-19 10:55:47.964927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.491 [2024-11-19 10:55:47.964945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.491 [2024-11-19 10:55:47.964957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.491 [2024-11-19 10:55:47.964969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:47.977181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:47.977573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:47.977606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:47.977622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:47.977863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:47.978071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:47.978089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:47.978101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:47.978112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:47.990556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:47.990936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:47.990964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:47.990980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:47.991223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:47.991476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:47.991497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:47.991509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:47.991521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.003842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.004283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.004320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.004338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.004551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.004779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.004799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.004827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.004839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.017270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.017747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.017776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.017791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.018038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.018236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.018255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.018267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.018279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.030722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.031092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.031119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.031134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.031383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.031609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.031628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.031641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.031668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.043960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.044271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.044320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.044337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.044589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.044806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.044825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.044838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.044849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.057092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.057514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.057543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.057559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.057789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.058004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.058028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.058041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.058053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.070342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.070734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.492 [2024-11-19 10:55:48.070763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.492 [2024-11-19 10:55:48.070779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.492 [2024-11-19 10:55:48.071020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.492 [2024-11-19 10:55:48.071219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.492 [2024-11-19 10:55:48.071238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.492 [2024-11-19 10:55:48.071250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.492 [2024-11-19 10:55:48.071262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.492 [2024-11-19 10:55:48.083566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.492 [2024-11-19 10:55:48.083876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-11-19 10:55:48.083903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.493 [2024-11-19 10:55:48.083918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.493 [2024-11-19 10:55:48.084139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.493 [2024-11-19 10:55:48.084399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.493 [2024-11-19 10:55:48.084423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.493 [2024-11-19 10:55:48.084437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.493 [2024-11-19 10:55:48.084450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.493 [2024-11-19 10:55:48.096882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.493 [2024-11-19 10:55:48.097207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-11-19 10:55:48.097234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.493 [2024-11-19 10:55:48.097249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.493 [2024-11-19 10:55:48.097500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.493 [2024-11-19 10:55:48.097736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.493 [2024-11-19 10:55:48.097754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.493 [2024-11-19 10:55:48.097767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.493 [2024-11-19 10:55:48.097779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.493 [2024-11-19 10:55:48.110548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.493 [2024-11-19 10:55:48.110953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-11-19 10:55:48.110981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.493 [2024-11-19 10:55:48.110996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.493 [2024-11-19 10:55:48.111210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.752 [2024-11-19 10:55:48.111437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.752 [2024-11-19 10:55:48.111458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.752 [2024-11-19 10:55:48.111471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.752 [2024-11-19 10:55:48.111484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.752 [2024-11-19 10:55:48.123890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.752 [2024-11-19 10:55:48.124262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.752 [2024-11-19 10:55:48.124289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.752 [2024-11-19 10:55:48.124313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.752 [2024-11-19 10:55:48.124543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.752 [2024-11-19 10:55:48.124777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.752 [2024-11-19 10:55:48.124795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.752 [2024-11-19 10:55:48.124807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.752 [2024-11-19 10:55:48.124819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.752 [2024-11-19 10:55:48.137066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.752 [2024-11-19 10:55:48.137415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.752 [2024-11-19 10:55:48.137443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.752 [2024-11-19 10:55:48.137458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.137685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.137899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.137917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.137929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.137941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.150348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.150679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.150725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.150742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.150964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.151178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.151197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.151209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.151221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.163758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.164129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.164171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.164187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.164436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.164668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.164686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.164698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.164710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.176941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.177313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.177357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.177373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.177614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.177828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.177846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.177858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.177870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.190261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.190621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.190649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.190665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.190902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.191117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.191135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.191147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.191159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.203564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.203986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.204014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.204029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.204258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.204505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.204526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.204539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.204551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.216798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.217245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.217272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.217288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.217523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.217757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.217776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.217789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.217800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.230019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.230432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.230461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.753 [2024-11-19 10:55:48.230477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.753 [2024-11-19 10:55:48.230722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.753 [2024-11-19 10:55:48.230920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.753 [2024-11-19 10:55:48.230944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.753 [2024-11-19 10:55:48.230957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.753 [2024-11-19 10:55:48.230968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.753 [2024-11-19 10:55:48.243291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.753 [2024-11-19 10:55:48.243668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.753 [2024-11-19 10:55:48.243697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.243712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.243940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.244154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.244173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.244185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.244196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.754 [2024-11-19 10:55:48.256989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.754 [2024-11-19 10:55:48.257355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.754 [2024-11-19 10:55:48.257384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.257399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.257627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.257859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.257880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.257893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.257906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.754 [2024-11-19 10:55:48.270413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.754 [2024-11-19 10:55:48.270794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.754 [2024-11-19 10:55:48.270822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.270837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.271063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.271276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.271319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.271333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.271361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.754 [2024-11-19 10:55:48.283755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.754 [2024-11-19 10:55:48.284082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.754 [2024-11-19 10:55:48.284109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.284124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.284377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.284589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.284609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.284622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.284650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.754 [2024-11-19 10:55:48.297208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.754 [2024-11-19 10:55:48.297575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.754 [2024-11-19 10:55:48.297603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.297618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.297831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.298092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.298112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.298124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.298136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:00.754 [2024-11-19 10:55:48.310459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:00.754 [2024-11-19 10:55:48.310848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.754 [2024-11-19 10:55:48.310876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:00.754 [2024-11-19 10:55:48.310892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:00.754 [2024-11-19 10:55:48.311133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:00.754 [2024-11-19 10:55:48.311373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:00.754 [2024-11-19 10:55:48.311393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:00.754 [2024-11-19 10:55:48.311405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:00.754 [2024-11-19 10:55:48.311417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the reconnect/reset failure cycle above repeats 27 more times between 10:55:48.323754 and 10:55:48.671332 (one attempt roughly every 13 ms); each iteration is identical except for timestamps: nvme_ctrlr_disconnect resets the controller, connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), tqpair=0x20cea40 cannot be flushed (Bad file descriptor), and every reset of [nqn.2016-06.io.spdk:cnode1, 2] ends with "Resetting controller failed." ...]
00:28:01.277 [2024-11-19 10:55:48.683589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.683951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.683979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.683995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.684226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.684494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.684516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.684529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.684542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.277 [2024-11-19 10:55:48.696822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.697192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.697220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.697235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.697474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.697713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.697731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.697743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.697754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.277 [2024-11-19 10:55:48.710147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.710520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.710548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.710563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.710796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.711010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.711029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.711041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.711053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.277 [2024-11-19 10:55:48.723387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.723814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.723855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.723870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.724096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.724336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.724373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.724387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.724399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.277 [2024-11-19 10:55:48.736712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.737111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.737153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.737168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.737435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.737659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.737678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.737690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.737701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.277 [2024-11-19 10:55:48.750023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.277 [2024-11-19 10:55:48.750397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.277 [2024-11-19 10:55:48.750425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.277 [2024-11-19 10:55:48.750441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.277 [2024-11-19 10:55:48.750682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.277 [2024-11-19 10:55:48.750880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.277 [2024-11-19 10:55:48.750903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.277 [2024-11-19 10:55:48.750916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.277 [2024-11-19 10:55:48.750927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.763381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.763765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.763793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.763808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.764036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.764292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.278 [2024-11-19 10:55:48.764323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.278 [2024-11-19 10:55:48.764338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.278 [2024-11-19 10:55:48.764350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.776874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.777282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.777315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.777347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.777553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.777766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.278 [2024-11-19 10:55:48.777785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.278 [2024-11-19 10:55:48.777797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.278 [2024-11-19 10:55:48.777809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.790214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.790574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.790602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.790618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.790868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.791066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.278 [2024-11-19 10:55:48.791084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.278 [2024-11-19 10:55:48.791096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.278 [2024-11-19 10:55:48.791108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.803517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.803905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.803947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.803963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.804215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.804478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.278 [2024-11-19 10:55:48.804500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.278 [2024-11-19 10:55:48.804513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.278 [2024-11-19 10:55:48.804526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.816755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.817129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.817171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.817186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.817438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.817679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.278 [2024-11-19 10:55:48.817697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.278 [2024-11-19 10:55:48.817709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.278 [2024-11-19 10:55:48.817721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.278 [2024-11-19 10:55:48.829964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.278 [2024-11-19 10:55:48.830367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.278 [2024-11-19 10:55:48.830394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.278 [2024-11-19 10:55:48.830409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.278 [2024-11-19 10:55:48.830669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.278 [2024-11-19 10:55:48.830867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.279 [2024-11-19 10:55:48.830886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.279 [2024-11-19 10:55:48.830898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.279 [2024-11-19 10:55:48.830909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.279 [2024-11-19 10:55:48.843144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.279 [2024-11-19 10:55:48.843544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.279 [2024-11-19 10:55:48.843595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.279 [2024-11-19 10:55:48.843612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.279 [2024-11-19 10:55:48.843843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.279 [2024-11-19 10:55:48.844041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.279 [2024-11-19 10:55:48.844060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.279 [2024-11-19 10:55:48.844072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.279 [2024-11-19 10:55:48.844084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.279 [2024-11-19 10:55:48.856451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.279 [2024-11-19 10:55:48.856842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.279 [2024-11-19 10:55:48.856884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.279 [2024-11-19 10:55:48.856899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.279 [2024-11-19 10:55:48.857146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.279 4480.40 IOPS, 17.50 MiB/s [2024-11-19T09:55:48.902Z] [2024-11-19 10:55:48.858834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.279 [2024-11-19 10:55:48.858850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.279 [2024-11-19 10:55:48.858876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.279 [2024-11-19 10:55:48.858888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.279 [2024-11-19 10:55:48.869635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.279 [2024-11-19 10:55:48.870050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.279 [2024-11-19 10:55:48.870076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.279 [2024-11-19 10:55:48.870106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.279 [2024-11-19 10:55:48.870351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.279 [2024-11-19 10:55:48.870570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.279 [2024-11-19 10:55:48.870604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.279 [2024-11-19 10:55:48.870616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.279 [2024-11-19 10:55:48.870628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.279 [2024-11-19 10:55:48.882733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.279 [2024-11-19 10:55:48.883095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.279 [2024-11-19 10:55:48.883122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.279 [2024-11-19 10:55:48.883137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.279 [2024-11-19 10:55:48.883384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.279 [2024-11-19 10:55:48.883604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.279 [2024-11-19 10:55:48.883623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.279 [2024-11-19 10:55:48.883636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.279 [2024-11-19 10:55:48.883649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.279 [2024-11-19 10:55:48.896297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.279 [2024-11-19 10:55:48.896741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.279 [2024-11-19 10:55:48.896783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.279 [2024-11-19 10:55:48.896798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.279 [2024-11-19 10:55:48.897040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.539 [2024-11-19 10:55:48.897328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.539 [2024-11-19 10:55:48.897350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.539 [2024-11-19 10:55:48.897363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.539 [2024-11-19 10:55:48.897376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.539 [2024-11-19 10:55:48.909445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.539 [2024-11-19 10:55:48.909870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-11-19 10:55:48.909897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.539 [2024-11-19 10:55:48.909912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.539 [2024-11-19 10:55:48.910147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.539 [2024-11-19 10:55:48.910382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.539 [2024-11-19 10:55:48.910402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.539 [2024-11-19 10:55:48.910415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.539 [2024-11-19 10:55:48.910427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.539 [2024-11-19 10:55:48.922551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.539 [2024-11-19 10:55:48.922913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-11-19 10:55:48.922940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.540 [2024-11-19 10:55:48.922970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.540 [2024-11-19 10:55:48.923224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.540 [2024-11-19 10:55:48.923446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.540 [2024-11-19 10:55:48.923470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.540 [2024-11-19 10:55:48.923483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.540 [2024-11-19 10:55:48.923495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.540 [2024-11-19 10:55:48.935678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.540 [2024-11-19 10:55:48.936053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-11-19 10:55:48.936103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.540 [2024-11-19 10:55:48.936118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.540 [2024-11-19 10:55:48.936382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.540 [2024-11-19 10:55:48.936587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.540 [2024-11-19 10:55:48.936605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.540 [2024-11-19 10:55:48.936618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.540 [2024-11-19 10:55:48.936643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.540 [2024-11-19 10:55:48.948913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:48.949414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:48.949445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:48.949477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:48.949741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:48.949934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:48.949952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:48.949964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:48.949975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:48.962048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:48.962414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:48.962456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:48.962472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:48.962725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:48.962917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:48.962935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:48.962946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:48.962958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:48.975245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:48.975685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:48.975726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:48.975742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:48.975982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:48.976175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:48.976193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:48.976204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:48.976215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:48.988347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:48.988704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:48.988731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:48.988746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:48.988959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:48.989167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:48.989185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:48.989196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:48.989207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:49.001439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:49.001823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:49.001863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:49.001877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:49.002105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:49.002339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:49.002373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:49.002386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:49.002398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:49.014724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:49.015157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:49.015204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:49.015220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.540 [2024-11-19 10:55:49.015459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.540 [2024-11-19 10:55:49.015710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.540 [2024-11-19 10:55:49.015730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.540 [2024-11-19 10:55:49.015742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.540 [2024-11-19 10:55:49.015754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.540 [2024-11-19 10:55:49.027996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.540 [2024-11-19 10:55:49.028335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-11-19 10:55:49.028362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.540 [2024-11-19 10:55:49.028377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.028598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.028807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.028825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.028837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.028848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.041260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.041657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.041700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.041716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.041960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.042152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.042170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.042182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.042193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.054332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.054706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.054748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.054763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.055017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.055224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.055242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.055253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.055264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.067386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.067750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.067777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.067792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.068026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.068235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.068253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.068265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.068276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.080441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.080807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.080849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.080864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.081114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.081363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.081384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.081397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.081410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.093557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.093921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.093964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.093979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.094231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.094468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.094494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.094507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.094518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.106641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.107002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.107045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.107060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.107321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.107519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.107538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.107550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.107561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.119644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.120133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.120175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.120191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.120440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.541 [2024-11-19 10:55:49.120657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.541 [2024-11-19 10:55:49.120676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.541 [2024-11-19 10:55:49.120687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.541 [2024-11-19 10:55:49.120699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.541 [2024-11-19 10:55:49.132817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.541 [2024-11-19 10:55:49.133206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.541 [2024-11-19 10:55:49.133233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.541 [2024-11-19 10:55:49.133248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.541 [2024-11-19 10:55:49.133502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.542 [2024-11-19 10:55:49.133748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.542 [2024-11-19 10:55:49.133766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.542 [2024-11-19 10:55:49.133777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.542 [2024-11-19 10:55:49.133789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.542 [2024-11-19 10:55:49.145856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.542 [2024-11-19 10:55:49.146268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.542 [2024-11-19 10:55:49.146317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.542 [2024-11-19 10:55:49.146335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.542 [2024-11-19 10:55:49.146578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.542 [2024-11-19 10:55:49.146803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.542 [2024-11-19 10:55:49.146821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.542 [2024-11-19 10:55:49.146833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.542 [2024-11-19 10:55:49.146844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.542 [2024-11-19 10:55:49.159376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.542 [2024-11-19 10:55:49.159765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.542 [2024-11-19 10:55:49.159793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.542 [2024-11-19 10:55:49.159808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.542 [2024-11-19 10:55:49.160022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.800 [2024-11-19 10:55:49.160277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.800 [2024-11-19 10:55:49.160322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.800 [2024-11-19 10:55:49.160339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.800 [2024-11-19 10:55:49.160352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.800 [2024-11-19 10:55:49.172468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.800 [2024-11-19 10:55:49.172876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.800 [2024-11-19 10:55:49.172917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.800 [2024-11-19 10:55:49.172933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.800 [2024-11-19 10:55:49.173168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.800 [2024-11-19 10:55:49.173387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.173407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.173419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.173431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.185523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.185884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.185930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.185946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.186193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.186437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.186458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.186470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.186482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.198549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.198922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.198963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.198977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.199219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.199441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.199461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.199472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.199484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.211648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.212057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.212098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.212113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.212363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.212561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.212580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.212592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.212603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.224766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.225240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.225292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.225316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.225580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.225803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.225821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.225833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.225844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.237757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.238157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.238223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.238238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.238515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.238745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.238763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.238775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.238786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.250925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.251267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.251343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.251359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.251593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.251800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.251818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.251829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.251840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.801 [2024-11-19 10:55:49.264101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.801 [2024-11-19 10:55:49.264486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.801 [2024-11-19 10:55:49.264514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.801 [2024-11-19 10:55:49.264530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.801 [2024-11-19 10:55:49.264781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.801 [2024-11-19 10:55:49.265024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.801 [2024-11-19 10:55:49.265047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.801 [2024-11-19 10:55:49.265060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.801 [2024-11-19 10:55:49.265072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.802 [2024-11-19 10:55:49.277402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.802 [2024-11-19 10:55:49.277889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.802 [2024-11-19 10:55:49.277931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.802 [2024-11-19 10:55:49.277946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.802 [2024-11-19 10:55:49.278197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.802 [2024-11-19 10:55:49.278431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.802 [2024-11-19 10:55:49.278451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.802 [2024-11-19 10:55:49.278464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.802 [2024-11-19 10:55:49.278475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.802 [2024-11-19 10:55:49.290424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.802 [2024-11-19 10:55:49.290788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.802 [2024-11-19 10:55:49.290830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.802 [2024-11-19 10:55:49.290845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.802 [2024-11-19 10:55:49.291098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.802 [2024-11-19 10:55:49.291328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.802 [2024-11-19 10:55:49.291347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.802 [2024-11-19 10:55:49.291359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.802 [2024-11-19 10:55:49.291371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.802 [2024-11-19 10:55:49.303537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:01.802 [2024-11-19 10:55:49.303899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.802 [2024-11-19 10:55:49.303941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420
00:28:01.802 [2024-11-19 10:55:49.303957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set
00:28:01.802 [2024-11-19 10:55:49.304208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor
00:28:01.802 [2024-11-19 10:55:49.304443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:01.802 [2024-11-19 10:55:49.304463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:01.802 [2024-11-19 10:55:49.304475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:01.802 [2024-11-19 10:55:49.304486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:01.802 [2024-11-19 10:55:49.316607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.802 [2024-11-19 10:55:49.317016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-11-19 10:55:49.317070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.802 [2024-11-19 10:55:49.317084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.802 [2024-11-19 10:55:49.317334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.802 [2024-11-19 10:55:49.317548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.802 [2024-11-19 10:55:49.317566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.802 [2024-11-19 10:55:49.317578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.802 [2024-11-19 10:55:49.317590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.802 [2024-11-19 10:55:49.329739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.802 [2024-11-19 10:55:49.330070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-11-19 10:55:49.330097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.802 [2024-11-19 10:55:49.330112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.802 [2024-11-19 10:55:49.330342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.802 [2024-11-19 10:55:49.330561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.802 [2024-11-19 10:55:49.330581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.802 [2024-11-19 10:55:49.330593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.802 [2024-11-19 10:55:49.330605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.802 [2024-11-19 10:55:49.342918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.802 [2024-11-19 10:55:49.343279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-11-19 10:55:49.343312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.802 [2024-11-19 10:55:49.343329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.802 [2024-11-19 10:55:49.343563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.802 [2024-11-19 10:55:49.343771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.802 [2024-11-19 10:55:49.343789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.802 [2024-11-19 10:55:49.343801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.802 [2024-11-19 10:55:49.343812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.802 [2024-11-19 10:55:49.355995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.802 [2024-11-19 10:55:49.356358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-11-19 10:55:49.356400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.802 [2024-11-19 10:55:49.356417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.802 [2024-11-19 10:55:49.356645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.802 [2024-11-19 10:55:49.356837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.802 [2024-11-19 10:55:49.356855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.802 [2024-11-19 10:55:49.356867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.803 [2024-11-19 10:55:49.356878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.803 [2024-11-19 10:55:49.369023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.803 [2024-11-19 10:55:49.369388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-11-19 10:55:49.369430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.803 [2024-11-19 10:55:49.369445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.803 [2024-11-19 10:55:49.369691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.803 [2024-11-19 10:55:49.369899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.803 [2024-11-19 10:55:49.369916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.803 [2024-11-19 10:55:49.369928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.803 [2024-11-19 10:55:49.369939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.803 [2024-11-19 10:55:49.382125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.803 [2024-11-19 10:55:49.382556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-11-19 10:55:49.382583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.803 [2024-11-19 10:55:49.382598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.803 [2024-11-19 10:55:49.382817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.803 [2024-11-19 10:55:49.383023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.803 [2024-11-19 10:55:49.383041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.803 [2024-11-19 10:55:49.383052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.803 [2024-11-19 10:55:49.383063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.803 [2024-11-19 10:55:49.395134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.803 [2024-11-19 10:55:49.395503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-11-19 10:55:49.395529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.803 [2024-11-19 10:55:49.395544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.803 [2024-11-19 10:55:49.395763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.803 [2024-11-19 10:55:49.395971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.803 [2024-11-19 10:55:49.395989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.803 [2024-11-19 10:55:49.396000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.803 [2024-11-19 10:55:49.396011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.803 [2024-11-19 10:55:49.408223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.803 [2024-11-19 10:55:49.408615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-11-19 10:55:49.408657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:01.803 [2024-11-19 10:55:49.408673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:01.803 [2024-11-19 10:55:49.408905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:01.803 [2024-11-19 10:55:49.409112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.803 [2024-11-19 10:55:49.409130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.803 [2024-11-19 10:55:49.409144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.803 [2024-11-19 10:55:49.409155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.062 [2024-11-19 10:55:49.421872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.062 [2024-11-19 10:55:49.422235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.062 [2024-11-19 10:55:49.422262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.062 [2024-11-19 10:55:49.422278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.062 [2024-11-19 10:55:49.422513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.062 [2024-11-19 10:55:49.422749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.062 [2024-11-19 10:55:49.422768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.062 [2024-11-19 10:55:49.422780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.062 [2024-11-19 10:55:49.422792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.062 [2024-11-19 10:55:49.435067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.062 [2024-11-19 10:55:49.435457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.062 [2024-11-19 10:55:49.435499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.062 [2024-11-19 10:55:49.435515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.062 [2024-11-19 10:55:49.435779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.062 [2024-11-19 10:55:49.435991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.062 [2024-11-19 10:55:49.436014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.062 [2024-11-19 10:55:49.436027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.062 [2024-11-19 10:55:49.436039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.448617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.449078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.449132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.449146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.449414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.449613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.449646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.449658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.449669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.461874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.462282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.462332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.462349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.462602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.462812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.462830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.462842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.462853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.475182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.475572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.475617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.475632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.475879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.476087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.476105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.476117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.476128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1452999 Killed "${NVMF_APP[@]}" "$@" 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1453999 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1453999 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1453999 ']' 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.063 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.063 [2024-11-19 10:55:49.488744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.489157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.489190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.489221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.489444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.489672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.489690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.489703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.489715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.502222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.502569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.502597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.502613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.502842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.503062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.503081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.503099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.503111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.515784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.516187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.516215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.516231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.516455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.516704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.063 [2024-11-19 10:55:49.516725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.063 [2024-11-19 10:55:49.516738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.063 [2024-11-19 10:55:49.516751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.063 [2024-11-19 10:55:49.529391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.063 [2024-11-19 10:55:49.529776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.063 [2024-11-19 10:55:49.529820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.063 [2024-11-19 10:55:49.529837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.063 [2024-11-19 10:55:49.530063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.063 [2024-11-19 10:55:49.530332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.530354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.530368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.530381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.064 [2024-11-19 10:55:49.532670] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:28:02.064 [2024-11-19 10:55:49.532745] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.064 [2024-11-19 10:55:49.542923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.543270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.543471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.543494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.543749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.543947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.543966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.543984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.543997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.556465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.556858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.556886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.556902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.557130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.557381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.557402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.557415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.557428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.569888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.570266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.570317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.570334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.570562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.570804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.570824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.570836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.570849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.583251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.583598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.583627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.583643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.583879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.584094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.584113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.584127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.584139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.596571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.596906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.596932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.596947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.597146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.597394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.597415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.597429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.597441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.609728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.064 [2024-11-19 10:55:49.609859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.610278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.610312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.064 [2024-11-19 10:55:49.610330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.064 [2024-11-19 10:55:49.610558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.064 [2024-11-19 10:55:49.610775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.064 [2024-11-19 10:55:49.610794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.064 [2024-11-19 10:55:49.610807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.064 [2024-11-19 10:55:49.610818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.064 [2024-11-19 10:55:49.623187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.064 [2024-11-19 10:55:49.623748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.064 [2024-11-19 10:55:49.623800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.065 [2024-11-19 10:55:49.623834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.065 [2024-11-19 10:55:49.624086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.065 [2024-11-19 10:55:49.624332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.065 [2024-11-19 10:55:49.624354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.065 [2024-11-19 10:55:49.624370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.065 [2024-11-19 10:55:49.624385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.065 [2024-11-19 10:55:49.636701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.065 [2024-11-19 10:55:49.637096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.065 [2024-11-19 10:55:49.637147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.065 [2024-11-19 10:55:49.637164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.065 [2024-11-19 10:55:49.637450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.065 [2024-11-19 10:55:49.637689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.065 [2024-11-19 10:55:49.637708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.065 [2024-11-19 10:55:49.637720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.065 [2024-11-19 10:55:49.637732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.065 [2024-11-19 10:55:49.650050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.065 [2024-11-19 10:55:49.650466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.065 [2024-11-19 10:55:49.650494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.065 [2024-11-19 10:55:49.650510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.065 [2024-11-19 10:55:49.650760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.065 [2024-11-19 10:55:49.650958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.065 [2024-11-19 10:55:49.650976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.065 [2024-11-19 10:55:49.650989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.065 [2024-11-19 10:55:49.651001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.065 [2024-11-19 10:55:49.663455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.065 [2024-11-19 10:55:49.663886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.065 [2024-11-19 10:55:49.663913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.065 [2024-11-19 10:55:49.663942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.065 [2024-11-19 10:55:49.664163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.065 [2024-11-19 10:55:49.664409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.065 [2024-11-19 10:55:49.664430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.065 [2024-11-19 10:55:49.664444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.065 [2024-11-19 10:55:49.664456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.065 [2024-11-19 10:55:49.668061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.065 [2024-11-19 10:55:49.668092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.065 [2024-11-19 10:55:49.668121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.065 [2024-11-19 10:55:49.668133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:02.065 [2024-11-19 10:55:49.668143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.065 [2024-11-19 10:55:49.669566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.065 [2024-11-19 10:55:49.669622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.065 [2024-11-19 10:55:49.669626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.065 [2024-11-19 10:55:49.677008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.065 [2024-11-19 10:55:49.677440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.065 [2024-11-19 10:55:49.677475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.065 [2024-11-19 10:55:49.677494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.065 [2024-11-19 10:55:49.677729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.065 [2024-11-19 10:55:49.677945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.065 [2024-11-19 10:55:49.677965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.065 [2024-11-19 10:55:49.677982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.065 [2024-11-19 10:55:49.677996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.325 [2024-11-19 10:55:49.690746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.325 [2024-11-19 10:55:49.691202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.325 [2024-11-19 10:55:49.691237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.325 [2024-11-19 10:55:49.691257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.325 [2024-11-19 10:55:49.691487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.325 [2024-11-19 10:55:49.691723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.325 [2024-11-19 10:55:49.691745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.325 [2024-11-19 10:55:49.691761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.325 [2024-11-19 10:55:49.691776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.325 [2024-11-19 10:55:49.704319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.325 [2024-11-19 10:55:49.704827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.325 [2024-11-19 10:55:49.704866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.325 [2024-11-19 10:55:49.704886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.325 [2024-11-19 10:55:49.705109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.325 [2024-11-19 10:55:49.705345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.325 [2024-11-19 10:55:49.705367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.325 [2024-11-19 10:55:49.705384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.325 [2024-11-19 10:55:49.705414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.325 [2024-11-19 10:55:49.717998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.325 [2024-11-19 10:55:49.718532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.325 [2024-11-19 10:55:49.718570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.325 [2024-11-19 10:55:49.718590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.325 [2024-11-19 10:55:49.718827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.325 [2024-11-19 10:55:49.719043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.325 [2024-11-19 10:55:49.719065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.325 [2024-11-19 10:55:49.719082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.325 [2024-11-19 10:55:49.719097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.325 [2024-11-19 10:55:49.731532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.325 [2024-11-19 10:55:49.731988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.325 [2024-11-19 10:55:49.732023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.325 [2024-11-19 10:55:49.732041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.325 [2024-11-19 10:55:49.732277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.325 [2024-11-19 10:55:49.732522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.732545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.732561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.732575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 [2024-11-19 10:55:49.745136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 [2024-11-19 10:55:49.745683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.745720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.745740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 [2024-11-19 10:55:49.745978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.746194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.746215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.746231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.746246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 [2024-11-19 10:55:49.758599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 [2024-11-19 10:55:49.758944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.758973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.758997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 [2024-11-19 10:55:49.759227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.759469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.759491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.759505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.759518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 [2024-11-19 10:55:49.772132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 [2024-11-19 10:55:49.772490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.772520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.772536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 [2024-11-19 10:55:49.772752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.772969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.772990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.773003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.773016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 [2024-11-19 10:55:49.785737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 [2024-11-19 10:55:49.786088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.786118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.786134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 [2024-11-19 10:55:49.786358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.786576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.786596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.786609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.786621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.326 [2024-11-19 10:55:49.799323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 [2024-11-19 10:55:49.799735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.799763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.799778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 [2024-11-19 10:55:49.799991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.800216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.800236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.800249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.800261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.326 [2024-11-19 10:55:49.812887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.326 [2024-11-19 10:55:49.813258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.326 [2024-11-19 10:55:49.813287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.326 [2024-11-19 10:55:49.813310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.326 [2024-11-19 10:55:49.813526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.326 [2024-11-19 10:55:49.813755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.326 [2024-11-19 10:55:49.813775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.326 [2024-11-19 10:55:49.813788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.326 [2024-11-19 10:55:49.813801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.326 [2024-11-19 10:55:49.816763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.326 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.326 [2024-11-19 10:55:49.826499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.327 [2024-11-19 10:55:49.826850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.327 [2024-11-19 10:55:49.826878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.327 [2024-11-19 10:55:49.826893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.327 [2024-11-19 10:55:49.827121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.327 [2024-11-19 10:55:49.827357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.327 [2024-11-19 10:55:49.827384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.327 [2024-11-19 10:55:49.827399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.327 [2024-11-19 10:55:49.827412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.327 [2024-11-19 10:55:49.840093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.327 [2024-11-19 10:55:49.840531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.327 [2024-11-19 10:55:49.840563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.327 [2024-11-19 10:55:49.840581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.327 [2024-11-19 10:55:49.840816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.327 [2024-11-19 10:55:49.841039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.327 [2024-11-19 10:55:49.841058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.327 [2024-11-19 10:55:49.841072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.327 [2024-11-19 10:55:49.841085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.327 [2024-11-19 10:55:49.853643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.327 [2024-11-19 10:55:49.854049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.327 [2024-11-19 10:55:49.854080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.327 [2024-11-19 10:55:49.854097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.327 [2024-11-19 10:55:49.854338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.327 [2024-11-19 10:55:49.854554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.327 [2024-11-19 10:55:49.854575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.327 [2024-11-19 10:55:49.854590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.327 [2024-11-19 10:55:49.854604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.327 Malloc0 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.327 3733.67 IOPS, 14.58 MiB/s [2024-11-19T09:55:49.950Z] 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.327 [2024-11-19 10:55:49.867344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.327 [2024-11-19 10:55:49.867670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.327 [2024-11-19 10:55:49.867707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cea40 with addr=10.0.0.2, port=4420 00:28:02.327 [2024-11-19 10:55:49.867724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cea40 is same with the state(6) to be set 00:28:02.327 [2024-11-19 10:55:49.867939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cea40 (9): Bad file descriptor 00:28:02.327 [2024-11-19 10:55:49.868157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.327 [2024-11-19 10:55:49.868177] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.327 [2024-11-19 10:55:49.868191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.327 [2024-11-19 10:55:49.868205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.327 [2024-11-19 10:55:49.878947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.327 [2024-11-19 10:55:49.880826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.327 10:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1453286 00:28:02.585 [2024-11-19 10:55:49.950166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:04.452 4273.86 IOPS, 16.69 MiB/s [2024-11-19T09:55:53.009Z] 4790.25 IOPS, 18.71 MiB/s [2024-11-19T09:55:53.942Z] 5191.33 IOPS, 20.28 MiB/s [2024-11-19T09:55:55.314Z] 5498.60 IOPS, 21.48 MiB/s [2024-11-19T09:55:55.881Z] 5770.45 IOPS, 22.54 MiB/s [2024-11-19T09:55:57.254Z] 5999.25 IOPS, 23.43 MiB/s [2024-11-19T09:55:58.185Z] 6180.38 IOPS, 24.14 MiB/s [2024-11-19T09:55:59.143Z] 6329.57 IOPS, 24.72 MiB/s 00:28:11.520 Latency(us) 00:28:11.520 [2024-11-19T09:55:59.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:11.520 Verification LBA range: start 0x0 length 0x4000 00:28:11.520 Nvme1n1 : 15.00 6464.70 25.25 10196.34 0.00 7659.94 855.61 21359.88 00:28:11.520 [2024-11-19T09:55:59.143Z] =================================================================================================================== 00:28:11.520 [2024-11-19T09:55:59.143Z] Total : 6464.70 25.25 10196.34 0.00 7659.94 855.61 21359.88 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.520 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:11.798 10:55:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.798 rmmod nvme_tcp 00:28:11.798 rmmod nvme_fabrics 00:28:11.798 rmmod nvme_keyring 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1453999 ']' 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1453999 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1453999 ']' 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1453999 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1453999 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1453999' 00:28:11.798 killing process with pid 1453999 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 1453999 00:28:11.798 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1453999 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.056 10:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.956 10:56:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.956 00:28:13.956 real 0m22.529s 00:28:13.956 user 1m0.502s 00:28:13.956 sys 0m4.073s 00:28:13.956 10:56:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.956 10:56:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:13.956 ************************************ 00:28:13.956 END TEST nvmf_bdevperf 00:28:13.957 ************************************ 00:28:13.957 10:56:01 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:13.957 10:56:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.957 10:56:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.957 10:56:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.215 ************************************ 00:28:14.215 START TEST nvmf_target_disconnect 00:28:14.215 ************************************ 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:14.215 * Looking for test storage... 00:28:14.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.215 10:56:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.215 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.216 --rc genhtml_branch_coverage=1 00:28:14.216 --rc genhtml_function_coverage=1 00:28:14.216 --rc genhtml_legend=1 00:28:14.216 --rc geninfo_all_blocks=1 00:28:14.216 --rc geninfo_unexecuted_blocks=1 
00:28:14.216 00:28:14.216 ' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.216 --rc genhtml_branch_coverage=1 00:28:14.216 --rc genhtml_function_coverage=1 00:28:14.216 --rc genhtml_legend=1 00:28:14.216 --rc geninfo_all_blocks=1 00:28:14.216 --rc geninfo_unexecuted_blocks=1 00:28:14.216 00:28:14.216 ' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.216 --rc genhtml_branch_coverage=1 00:28:14.216 --rc genhtml_function_coverage=1 00:28:14.216 --rc genhtml_legend=1 00:28:14.216 --rc geninfo_all_blocks=1 00:28:14.216 --rc geninfo_unexecuted_blocks=1 00:28:14.216 00:28:14.216 ' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.216 --rc genhtml_branch_coverage=1 00:28:14.216 --rc genhtml_function_coverage=1 00:28:14.216 --rc genhtml_legend=1 00:28:14.216 --rc geninfo_all_blocks=1 00:28:14.216 --rc geninfo_unexecuted_blocks=1 00:28:14.216 00:28:14.216 ' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.216 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.217 10:56:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:14.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.217 10:56:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.749 
10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:16.749 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:16.749 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:16.749 Found net devices under 0000:09:00.0: cvl_0_0 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:16.749 Found net devices under 0000:09:00.1: cvl_0_1 00:28:16.749 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.750 10:56:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:28:16.750 00:28:16.750 --- 10.0.0.2 ping statistics --- 00:28:16.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.750 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:16.750 00:28:16.750 --- 10.0.0.1 ping statistics --- 00:28:16.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.750 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:16.750 10:56:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.750 ************************************ 00:28:16.750 START TEST nvmf_target_disconnect_tc1 00:28:16.750 ************************************ 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:16.750 10:56:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.750 [2024-11-19 10:56:04.069264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.750 [2024-11-19 10:56:04.069341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x908f40 with 
addr=10.0.0.2, port=4420 00:28:16.750 [2024-11-19 10:56:04.069384] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:16.750 [2024-11-19 10:56:04.069406] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:16.751 [2024-11-19 10:56:04.069426] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:16.751 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:16.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:16.751 Initializing NVMe Controllers 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.751 00:28:16.751 real 0m0.108s 00:28:16.751 user 0m0.051s 00:28:16.751 sys 0m0.054s 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.751 ************************************ 00:28:16.751 END TEST nvmf_target_disconnect_tc1 00:28:16.751 ************************************ 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.751 10:56:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.751 ************************************ 00:28:16.751 START TEST nvmf_target_disconnect_tc2 00:28:16.751 ************************************ 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1457114 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1457114 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1457114 ']' 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.751 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.751 [2024-11-19 10:56:04.192615] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:28:16.751 [2024-11-19 10:56:04.192716] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.751 [2024-11-19 10:56:04.263909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.751 [2024-11-19 10:56:04.325019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.751 [2024-11-19 10:56:04.325063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.751 [2024-11-19 10:56:04.325092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.751 [2024-11-19 10:56:04.325104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.751 [2024-11-19 10:56:04.325114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:16.751 [2024-11-19 10:56:04.326759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:16.751 [2024-11-19 10:56:04.326826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:16.751 [2024-11-19 10:56:04.326905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:16.751 [2024-11-19 10:56:04.326910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 Malloc0 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 [2024-11-19 10:56:04.510341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 [2024-11-19 10:56:04.538609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1457263 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:17.009 10:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.565 10:56:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1457114 00:28:19.565 10:56:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 
Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 [2024-11-19 10:56:06.564888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O 
failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Read completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.565 Write completed with error (sct=0, sc=8) 00:28:19.565 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 
00:28:19.566 [2024-11-19 10:56:06.565223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 
starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 [2024-11-19 10:56:06.565530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, 
sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Read completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 Write completed with error (sct=0, sc=8) 00:28:19.566 starting I/O failed 00:28:19.566 [2024-11-19 10:56:06.565855] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:19.566 [2024-11-19 10:56:06.566037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.566 [2024-11-19 10:56:06.566088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.566 qpair failed and we were unable to recover it. 00:28:19.566 [2024-11-19 10:56:06.566221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.566 [2024-11-19 10:56:06.566249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.566 qpair failed and we were unable to recover it. 00:28:19.566 [2024-11-19 10:56:06.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.566 [2024-11-19 10:56:06.566395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.566 qpair failed and we were unable to recover it. 00:28:19.566 [2024-11-19 10:56:06.566505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.566 [2024-11-19 10:56:06.566532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.566 qpair failed and we were unable to recover it. 00:28:19.566 [2024-11-19 10:56:06.566646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.566 [2024-11-19 10:56:06.566686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.566 qpair failed and we were unable to recover it. 
00:28:19.566 [2024-11-19 10:56:06.566792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.566820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.566933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.566958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.567459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.567854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.567990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.568155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.568274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.568382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.568531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.568681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.568861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.568887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.568979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.569585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.569964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.569990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.570101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.570238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.570399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.570547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.570665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.570832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 
00:28:19.567 [2024-11-19 10:56:06.570952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.570977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.567 [2024-11-19 10:56:06.571089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.567 [2024-11-19 10:56:06.571115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.567 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.571593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.571910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.571999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.572143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.572316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.572444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.572552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.572694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.572828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.572964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.572989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.573695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.573969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.573994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.574390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.574936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.574963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 
00:28:19.568 [2024-11-19 10:56:06.575056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.575095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.575221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.575251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.575346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.568 [2024-11-19 10:56:06.575374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.568 qpair failed and we were unable to recover it. 00:28:19.568 [2024-11-19 10:56:06.575461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.575487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.575586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.575612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 
00:28:19.569 [2024-11-19 10:56:06.575698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.575726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.575840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.575871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.575979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 
00:28:19.569 [2024-11-19 10:56:06.576355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.576922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.576948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 
00:28:19.569 [2024-11-19 10:56:06.577031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 
00:28:19.569 [2024-11-19 10:56:06.577628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.577894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.577919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.578032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.578058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 00:28:19.569 [2024-11-19 10:56:06.578153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.569 [2024-11-19 10:56:06.578178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.569 qpair failed and we were unable to recover it. 
00:28:19.569 [2024-11-19 10:56:06.578258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.578399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.578551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.578710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.578833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.578973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.578999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.579103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.579142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.579231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.579257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.579359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.579391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.569 qpair failed and we were unable to recover it.
00:28:19.569 [2024-11-19 10:56:06.579487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.569 [2024-11-19 10:56:06.579513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.579629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.579655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.579739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.579876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.579901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.580956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.580981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.581917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.581944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.582897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.582922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.570 qpair failed and we were unable to recover it.
00:28:19.570 [2024-11-19 10:56:06.583031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.570 [2024-11-19 10:56:06.583058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.583941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.583966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.584905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.584990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.585865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.585891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.571 [2024-11-19 10:56:06.586767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.571 [2024-11-19 10:56:06.586792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.571 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.586934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.586959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.587900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.587926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.588908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.588934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.589935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.589960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.590858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.590884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.591001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.572 [2024-11-19 10:56:06.591026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.572 qpair failed and we were unable to recover it.
00:28:19.572 [2024-11-19 10:56:06.591111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.591228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.591367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.591538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.591688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.591873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.591923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.592827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.592854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.593003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.593029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.593137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.593163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.593295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.593331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.593449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.573 [2024-11-19 10:56:06.593476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.573 qpair failed and we were unable to recover it.
00:28:19.573 [2024-11-19 10:56:06.593604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.593643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.593784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.593812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.593909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.593935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 
00:28:19.573 [2024-11-19 10:56:06.594272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.594839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.594866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 
00:28:19.573 [2024-11-19 10:56:06.595015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.595042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.595121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.595148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.595262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.595288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.595378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.595409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.573 qpair failed and we were unable to recover it. 00:28:19.573 [2024-11-19 10:56:06.595495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.573 [2024-11-19 10:56:06.595521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.595637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.595663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.595771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.595797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.595883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.595909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.596313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.596834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.596952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.596978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.597631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.597909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.597935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.598350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.598841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.598982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.599272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.599419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.599556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 00:28:19.574 [2024-11-19 10:56:06.599696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.574 [2024-11-19 10:56:06.599721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.574 qpair failed and we were unable to recover it. 
00:28:19.574 [2024-11-19 10:56:06.599868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.599893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.599994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.600461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.600970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.600995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.601111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.601249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.601368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.601530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.601642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.601761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.601955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.602497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.602941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.602966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.603084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.603225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.603392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.603527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.603664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 00:28:19.575 [2024-11-19 10:56:06.603809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.575 [2024-11-19 10:56:06.603835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.575 qpair failed and we were unable to recover it. 
00:28:19.575 [2024-11-19 10:56:06.603927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.575 [2024-11-19 10:56:06.603953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.575 qpair failed and we were unable to recover it.
00:28:19.575 [2024-11-19 10:56:06.604063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.575 [2024-11-19 10:56:06.604089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.604839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.604980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.605913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.605939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.606810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.606856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.576 [2024-11-19 10:56:06.607782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.576 [2024-11-19 10:56:06.607828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.576 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.607913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.607940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.608891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.608938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.609892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.609917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.610945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.610970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.611880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.611912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.612025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.577 [2024-11-19 10:56:06.612050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.577 qpair failed and we were unable to recover it.
00:28:19.577 [2024-11-19 10:56:06.612139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.612966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.612991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.613938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.613964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.614953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.614979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.615832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.615857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.616001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.616027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.616146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.578 [2024-11-19 10:56:06.616172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.578 qpair failed and we were unable to recover it.
00:28:19.578 [2024-11-19 10:56:06.616278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.616421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.616570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.616675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.616788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.616904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.616930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.617107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.617147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.617299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.617332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.617447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.617473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.617608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.617657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.617833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.617873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.618874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.618992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.579 [2024-11-19 10:56:06.619763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.579 qpair failed and we were unable to recover it.
00:28:19.579 [2024-11-19 10:56:06.619978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 
00:28:19.579 [2024-11-19 10:56:06.620711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.579 [2024-11-19 10:56:06.620929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.579 [2024-11-19 10:56:06.620956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.579 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.621070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.621116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.621281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.621331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.621467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.621493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.621593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.621619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.621758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.621811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.621987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.622151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.622290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.622440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.622554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.622717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.622830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.622935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.622960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.623612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.623869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.623897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.624289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.624899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.624925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 
00:28:19.580 [2024-11-19 10:56:06.625037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.625063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.625154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.625180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.625313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.625340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.625434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.580 [2024-11-19 10:56:06.625460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.580 qpair failed and we were unable to recover it. 00:28:19.580 [2024-11-19 10:56:06.625550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.625576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.625655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.625681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.625787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.625812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.625929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.625955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.626295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.626884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.627007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.627180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.627293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.627450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.627566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.627731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.627887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.627912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.628426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.628952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.628983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.629065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.629209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.629321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.629451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 00:28:19.581 [2024-11-19 10:56:06.629563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.581 qpair failed and we were unable to recover it. 
00:28:19.581 [2024-11-19 10:56:06.629674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.581 [2024-11-19 10:56:06.629700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.629818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.629844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.629952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.629977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 
00:28:19.582 [2024-11-19 10:56:06.630361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.630882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.630909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 
00:28:19.582 [2024-11-19 10:56:06.631051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 
00:28:19.582 [2024-11-19 10:56:06.631684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.631950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.631990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 
00:28:19.582 [2024-11-19 10:56:06.632346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.632888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.632914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 
00:28:19.582 [2024-11-19 10:56:06.633022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.633048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.633138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.633163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.633267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.582 [2024-11-19 10:56:06.633292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.582 qpair failed and we were unable to recover it. 00:28:19.582 [2024-11-19 10:56:06.633401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.633427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.633530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.633556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.633698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.633724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.633801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.633826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.633917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.633945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.634349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.634867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.634893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.634977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.635656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.635917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.635942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.636082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.636222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.636372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.636501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.636700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.636875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.636918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.637047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.637087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 
00:28:19.583 [2024-11-19 10:56:06.637265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.637291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.637399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.637424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.637545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.637572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.637678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.583 [2024-11-19 10:56:06.637717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.583 qpair failed and we were unable to recover it. 00:28:19.583 [2024-11-19 10:56:06.637847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.637873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.638059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.638683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.638945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.638971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.639328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.639869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.639895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.639986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.640616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.640886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.640992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.641159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.641309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.641453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.641576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.641734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 00:28:19.584 [2024-11-19 10:56:06.641877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.641903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.584 qpair failed and we were unable to recover it. 
00:28:19.584 [2024-11-19 10:56:06.642035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.584 [2024-11-19 10:56:06.642081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.642648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.642875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.642901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.643250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.643824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.643870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.643988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.644646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.644884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.644911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.645289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.645822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 
00:28:19.585 [2024-11-19 10:56:06.645927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.645953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.585 qpair failed and we were unable to recover it. 00:28:19.585 [2024-11-19 10:56:06.646038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.585 [2024-11-19 10:56:06.646065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.646528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.646916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.646945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.647172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.647803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.647910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.647936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.648402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.648952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.648979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.649094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.649206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.649346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.649467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.649636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 
00:28:19.586 [2024-11-19 10:56:06.649774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.649941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.649968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.586 [2024-11-19 10:56:06.650055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.586 [2024-11-19 10:56:06.650081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.586 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 
00:28:19.587 [2024-11-19 10:56:06.650448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.650914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.650940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 
00:28:19.587 [2024-11-19 10:56:06.651028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 
00:28:19.587 [2024-11-19 10:56:06.651642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.651804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.651970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.652180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.652324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 
00:28:19.587 [2024-11-19 10:56:06.652441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.652581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.652716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.652917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.652959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.653156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.653196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 
00:28:19.587 [2024-11-19 10:56:06.653363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.653402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.653489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.653517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.653612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.653638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.653788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.587 [2024-11-19 10:56:06.653837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.587 qpair failed and we were unable to recover it. 00:28:19.587 [2024-11-19 10:56:06.654015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 
00:28:19.588 [2024-11-19 10:56:06.654172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.654315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.654436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.654549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.654680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.654734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 
00:28:19.588 [2024-11-19 10:56:06.654944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.655127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.655325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.655454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.655594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 
00:28:19.588 [2024-11-19 10:56:06.655724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.655893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.655918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 
00:28:19.588 [2024-11-19 10:56:06.656476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 00:28:19.588 [2024-11-19 10:56:06.656966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.588 [2024-11-19 10:56:06.656995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.588 qpair failed and we were unable to recover it. 
00:28:19.588 [2024-11-19 10:56:06.657121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.588 [2024-11-19 10:56:06.657159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.588 qpair failed and we were unable to recover it.
00:28:19.592 [2024-11-19 10:56:06.671934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.671974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.672192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.672365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.672505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.672615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 
00:28:19.592 [2024-11-19 10:56:06.672712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.672853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.672897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 
00:28:19.592 [2024-11-19 10:56:06.673565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.673888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.673913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.674006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 
00:28:19.592 [2024-11-19 10:56:06.674215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.674398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.674523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.674733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.674869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.674895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 
00:28:19.592 [2024-11-19 10:56:06.674988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.675119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.675257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.675412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.675515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 
00:28:19.592 [2024-11-19 10:56:06.675769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.592 [2024-11-19 10:56:06.675937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.592 [2024-11-19 10:56:06.675976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.592 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.676155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.676333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.676468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.593 [2024-11-19 10:56:06.676583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.676788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.676906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.676932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.593 [2024-11-19 10:56:06.677335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.677917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.677957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.593 [2024-11-19 10:56:06.678143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.678366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.678479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.678650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.678753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.593 [2024-11-19 10:56:06.678856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.678882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.678997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.593 [2024-11-19 10:56:06.679476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.679918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.679959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 00:28:19.593 [2024-11-19 10:56:06.680195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.593 [2024-11-19 10:56:06.680236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.593 qpair failed and we were unable to recover it. 
00:28:19.594 [2024-11-19 10:56:06.680316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.680342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.680431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.680457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.680579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.680605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.680720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.680745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.680886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.680912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 
00:28:19.594 [2024-11-19 10:56:06.681055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.681230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.681405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.681548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.681663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 
00:28:19.594 [2024-11-19 10:56:06.681797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.681824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 
00:28:19.594 [2024-11-19 10:56:06.682570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.682936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.682961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 00:28:19.594 [2024-11-19 10:56:06.683044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.594 [2024-11-19 10:56:06.683095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.594 qpair failed and we were unable to recover it. 
00:28:19.594 [2024-11-19 10:56:06.683263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.594 [2024-11-19 10:56:06.683288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.594 qpair failed and we were unable to recover it.
00:28:19.595 [2024-11-19 10:56:06.685272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.595 [2024-11-19 10:56:06.685327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.595 qpair failed and we were unable to recover it.
00:28:19.595 [2024-11-19 10:56:06.686129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.595 [2024-11-19 10:56:06.686195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.595 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats over 100 more times between 10:56:06.683 and 10:56:06.700 (console timestamps 00:28:19.594-00:28:19.598), cycling through tqpair=0x1cdbfa0, 0x7f33b4000b90, and 0x7f33ac000b90, always with addr=10.0.0.2, port=4420; verbatim repetitions elided ...]
00:28:19.598 [2024-11-19 10:56:06.700630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.700655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.700770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.700826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 
00:28:19.598 [2024-11-19 10:56:06.701426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.701911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.701938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 
00:28:19.598 [2024-11-19 10:56:06.702046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.702182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.702321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.702489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.702628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 
00:28:19.598 [2024-11-19 10:56:06.702742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.702885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.702911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.598 [2024-11-19 10:56:06.703033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.598 [2024-11-19 10:56:06.703059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.598 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.703467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.703873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.703986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.704124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.704232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.704348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.704511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.704654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.704787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.704924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.704951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.705468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.705925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.706258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 
00:28:19.599 [2024-11-19 10:56:06.706840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.706867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.706989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.707016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.707124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.707150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.599 [2024-11-19 10:56:06.707264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.599 [2024-11-19 10:56:06.707289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.599 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.707411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.707437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.707533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.707560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.707696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.707722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.707846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.707872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.707982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.708144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.708261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.708392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.708529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.708676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.708785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.708908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.708934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.709734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.709961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.709987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.710083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.710110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.710259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.710298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.710404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.710431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.710588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.710628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.710779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.710820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.710991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.711038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 00:28:19.600 [2024-11-19 10:56:06.711214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.600 [2024-11-19 10:56:06.711240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.600 qpair failed and we were unable to recover it. 
00:28:19.600 [2024-11-19 10:56:06.711399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.600 [2024-11-19 10:56:06.711426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.600 qpair failed and we were unable to recover it.
00:28:19.600 [2024-11-19 10:56:06.711539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.600 [2024-11-19 10:56:06.711564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.600 qpair failed and we were unable to recover it.
00:28:19.600 [2024-11-19 10:56:06.711801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.600 [2024-11-19 10:56:06.711828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.600 qpair failed and we were unable to recover it.
00:28:19.600 [2024-11-19 10:56:06.711947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.600 [2024-11-19 10:56:06.711973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.600 qpair failed and we were unable to recover it.
00:28:19.600 [2024-11-19 10:56:06.712145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.600 [2024-11-19 10:56:06.712185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.600 qpair failed and we were unable to recover it.
00:28:19.600 [2024-11-19 10:56:06.712295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.712327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.712438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.712463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.712551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.712576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.712795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.712835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.712987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.713232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.713348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.713493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.713683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.713882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.713924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.714115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.714155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.714320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.714368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.714479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.714505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.714611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.714637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.714850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.714891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.715864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.715890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.716068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9f30 is same with the state(6) to be set
00:28:19.601 [2024-11-19 10:56:06.716295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.716341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.716466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.716493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.716607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.716633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.716747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.716775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.716915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.716961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.717101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.717219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.717362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.717477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.601 [2024-11-19 10:56:06.717618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.601 qpair failed and we were unable to recover it.
00:28:19.601 [2024-11-19 10:56:06.717807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.717832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.718887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.718926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.719922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.719947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.720894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.720920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.721887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.721913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.722032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.722059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.722180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.722207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.722343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.722370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.602 [2024-11-19 10:56:06.722453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.602 [2024-11-19 10:56:06.722479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.602 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.722558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.722584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.722696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.722722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.722837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.722864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.722979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.723892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.723918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.724905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.724989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.725824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.725851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.726970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.603 [2024-11-19 10:56:06.726995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.603 qpair failed and we were unable to recover it.
00:28:19.603 [2024-11-19 10:56:06.727134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.727948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.727974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.728915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.728964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.729090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.729115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.729229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.729255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.729346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.604 [2024-11-19 10:56:06.729374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.604 qpair failed and we were unable to recover it.
00:28:19.604 [2024-11-19 10:56:06.729492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.729518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.729627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.729653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.729793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.729819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.729934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.729960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.730099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 
00:28:19.604 [2024-11-19 10:56:06.730241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.730368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.730496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.730659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.730808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 
00:28:19.604 [2024-11-19 10:56:06.730953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.730979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.604 qpair failed and we were unable to recover it. 00:28:19.604 [2024-11-19 10:56:06.731093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.604 [2024-11-19 10:56:06.731119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.731217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.731362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.731502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.731644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.731760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.731933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.731958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.732288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.732799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.732933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.732958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.733640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.733908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.733934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.734298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.734813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 
00:28:19.605 [2024-11-19 10:56:06.734928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.734954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.735069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.735096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.605 [2024-11-19 10:56:06.735183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.605 [2024-11-19 10:56:06.735208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.605 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.735335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.735365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.735480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.735505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.735648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.735673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.735763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.735788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.735902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.735927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.736269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.736790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.736933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.736958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.737649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.737902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.737929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.738313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.738814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.738839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 
00:28:19.606 [2024-11-19 10:56:06.738983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.739028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.739141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.739168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.739249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.739275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.606 [2024-11-19 10:56:06.739391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.606 [2024-11-19 10:56:06.739417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.606 qpair failed and we were unable to recover it. 00:28:19.607 [2024-11-19 10:56:06.739533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.607 [2024-11-19 10:56:06.739560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.607 qpair failed and we were unable to recover it. 
00:28:19.607 [2024-11-19 10:56:06.739647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.739675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.739763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.739790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.739873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.739899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.740808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.740967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.741868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.741893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.742922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.742962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.743209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.743248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.743390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.743416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.743529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.743555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.743648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.743675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.743787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.743813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.607 [2024-11-19 10:56:06.744011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.607 [2024-11-19 10:56:06.744036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.607 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.744811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.744962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.745889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.745993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.746900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.746940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.747915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.747941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.748891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.748931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.749186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.608 [2024-11-19 10:56:06.749226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.608 qpair failed and we were unable to recover it.
00:28:19.608 [2024-11-19 10:56:06.749363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.749505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.749642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.749747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.749849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.749946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.749971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.750918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.750950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.751866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.751892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.752944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.753057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.753082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.609 [2024-11-19 10:56:06.753164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.609 [2024-11-19 10:56:06.753189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.609 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.753309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.753355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.753552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.753593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.753758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.753783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.753888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.753913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.753990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.754220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.754408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.754551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.754691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.754827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.754860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.755882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.755986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.756011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.756091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.756116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.756200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.610 [2024-11-19 10:56:06.756225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.610 qpair failed and we were unable to recover it.
00:28:19.610 [2024-11-19 10:56:06.756393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.756433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.756591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.756645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.756807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.756861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.756976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.757145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 
00:28:19.610 [2024-11-19 10:56:06.757266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.757450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.757593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.757716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.757881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.757923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 
00:28:19.610 [2024-11-19 10:56:06.758089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.758128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.758279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.758329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.610 [2024-11-19 10:56:06.758493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.610 [2024-11-19 10:56:06.758534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.610 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.758660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.758700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.758868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.758908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.759157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.759203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.759376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.759402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.759486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.759511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.759631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.759670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.759838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.759878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.760005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.760242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.760369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.760506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.760681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.760854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.760894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.761083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.761312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.761432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.761551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.761695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.761879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.761930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.762385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.762827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.762852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.763009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.763258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.763387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.763559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.763772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.763948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.763972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.611 [2024-11-19 10:56:06.764090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.764118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.764204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.764230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.764327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.764354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.764492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.764519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 00:28:19.611 [2024-11-19 10:56:06.764631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.611 [2024-11-19 10:56:06.764658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.611 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.764744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.764771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.764878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.764905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.765402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.765898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.765925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.766024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.766676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.766955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.767330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 00:28:19.612 [2024-11-19 10:56:06.767972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.767998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.768092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.768119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.612 [2024-11-19 10:56:06.769065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.612 [2024-11-19 10:56:06.769092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.612 qpair failed and we were unable to recover it. 
00:28:19.613 [2024-11-19 10:56:06.771430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.613 [2024-11-19 10:56:06.771469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.613 qpair failed and we were unable to recover it. 
[log condensed: the identical connect() failure (errno = 111, connection refused) and "qpair failed and we were unable to recover it" sequence repeats continuously for tqpairs 0x1cdbfa0, 0x7f33ac000b90, and 0x7f33b4000b90, all targeting addr=10.0.0.2, port=4420, from 10:56:06.768092 through 10:56:06.787464]
00:28:19.615 [2024-11-19 10:56:06.787544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.615 [2024-11-19 10:56:06.787570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.615 qpair failed and we were unable to recover it. 00:28:19.615 [2024-11-19 10:56:06.787660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.615 [2024-11-19 10:56:06.787685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.787819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.787860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.788002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.788041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.788211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.788251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.788398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.788457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.788683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.788708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.788824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.788849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.789016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.789057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.789248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.789288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.789468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.789508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.789655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.789696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.789827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.789869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.789980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.790020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.790172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.790218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.790413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.790454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.790593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.790633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.790827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.790866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.790978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.791018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.791199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.791238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.791430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.791494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.791653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.791714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.791849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.791888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.792079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.792119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.792288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.792342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.792516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.792574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.792739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.792779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.792966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.793181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.793399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.793665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.793828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.793957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.793981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.794156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.794197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.794315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.794340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 
00:28:19.616 [2024-11-19 10:56:06.794477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.794533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.794708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.794766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.794938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.794976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.795176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.795222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.616 qpair failed and we were unable to recover it. 00:28:19.616 [2024-11-19 10:56:06.795316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.616 [2024-11-19 10:56:06.795341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.795435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.795460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.795576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.795600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.795728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.795768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.795963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.796002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.796165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.796203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.796519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.796703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.796741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.796871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.796908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.797049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.797221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.797384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.797545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.797775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.797908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.797933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.798044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.798187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.798342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.798552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.798752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.798922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.798949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.799094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.799120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.799235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.799260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.799435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.799477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.799641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.799682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.799811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.799851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.800013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.800180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.800318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.800441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.800587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.800726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.800915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.800954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.801151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.801314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.801450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 
00:28:19.617 [2024-11-19 10:56:06.801583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.801729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.801904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.801943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.617 qpair failed and we were unable to recover it. 00:28:19.617 [2024-11-19 10:56:06.802109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.617 [2024-11-19 10:56:06.802149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 00:28:19.618 [2024-11-19 10:56:06.802280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.802332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 
00:28:19.618 [2024-11-19 10:56:06.802461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.802487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 00:28:19.618 [2024-11-19 10:56:06.802606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.802631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 00:28:19.618 [2024-11-19 10:56:06.802738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.802768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 00:28:19.618 [2024-11-19 10:56:06.802889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.802929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 00:28:19.618 [2024-11-19 10:56:06.803066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.618 [2024-11-19 10:56:06.803106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.618 qpair failed and we were unable to recover it. 
00:28:19.618 [2024-11-19 10:56:06.803273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.803394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.803538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.803653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.803802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.803902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.803928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.804957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.804983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.805120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.805160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.805322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.805362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.805481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.805522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.805687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.805730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.805901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.805941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.806949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.806997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.807966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.807991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.808097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.808148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.808331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.808357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.808467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.808492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.618 [2024-11-19 10:56:06.808584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.618 [2024-11-19 10:56:06.808610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.618 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.808718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.808743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.808831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.808882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.809896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.809921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.810910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.810936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.811927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.811967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.812129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.812171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.812300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.812354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.812476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.812516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.812644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.812686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.812868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.812916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.813885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.813911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.814003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.814029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.619 [2024-11-19 10:56:06.814108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.619 [2024-11-19 10:56:06.814134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.619 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.814862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.814889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.815881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.815921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.620 [2024-11-19 10:56:06.816920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.620 qpair failed and we were unable to recover it.
00:28:19.620 [2024-11-19 10:56:06.816996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.621 [2024-11-19 10:56:06.817028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.621 qpair failed and we were unable to recover it.
00:28:19.621 [2024-11-19 10:56:06.817117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.621 [2024-11-19 10:56:06.817142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.621 qpair failed and we were unable to recover it.
00:28:19.621 [2024-11-19 10:56:06.817226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.621 [2024-11-19 10:56:06.817251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.621 qpair failed and we were unable to recover it.
00:28:19.621 [2024-11-19 10:56:06.817336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.817385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.817560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.817600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.817762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.817802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.817958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.817997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.818125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 
00:28:19.621 [2024-11-19 10:56:06.818291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.818471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.818584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.818696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.818870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.818909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 
00:28:19.621 [2024-11-19 10:56:06.819036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.819254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.819448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.819584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.819717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 
00:28:19.621 [2024-11-19 10:56:06.819818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.819930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.819957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.820343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.820583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 
00:28:19.621 [2024-11-19 10:56:06.820716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.820825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.820943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.820968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.821107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.821147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.621 [2024-11-19 10:56:06.821296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.821352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 
00:28:19.621 [2024-11-19 10:56:06.821514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.621 [2024-11-19 10:56:06.821553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.621 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.821692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.821732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.821873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.821898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.822231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.822860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.822885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.822999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.823098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.823233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.823422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.823594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.823814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.823854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.824543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.824859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.824884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.825020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.825163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.825332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.825509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.825702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 00:28:19.622 [2024-11-19 10:56:06.825835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.622 [2024-11-19 10:56:06.825860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.622 qpair failed and we were unable to recover it. 
00:28:19.622 [2024-11-19 10:56:06.826000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.826190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.826365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.826531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.826709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 
00:28:19.623 [2024-11-19 10:56:06.826848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.826957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.826983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.827076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.827101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.827228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.827283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.827418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.827459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 
00:28:19.623 [2024-11-19 10:56:06.827634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.827673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.827781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.827821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.827971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.828167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.828375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 
00:28:19.623 [2024-11-19 10:56:06.828518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.828705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.828897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.828937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 
00:28:19.623 [2024-11-19 10:56:06.829493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.829971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.829997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 
00:28:19.623 [2024-11-19 10:56:06.830107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.830149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.830317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.830368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.830484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.830511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.830630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.830656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.623 qpair failed and we were unable to recover it. 00:28:19.623 [2024-11-19 10:56:06.830769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.623 [2024-11-19 10:56:06.830795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 
00:28:19.624 [2024-11-19 10:56:06.830908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.624 [2024-11-19 10:56:06.830934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 00:28:19.624 [2024-11-19 10:56:06.831094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.624 [2024-11-19 10:56:06.831133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 00:28:19.624 [2024-11-19 10:56:06.831321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.624 [2024-11-19 10:56:06.831362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 00:28:19.624 [2024-11-19 10:56:06.831521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.624 [2024-11-19 10:56:06.831561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 00:28:19.624 [2024-11-19 10:56:06.831704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.624 [2024-11-19 10:56:06.831744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.624 qpair failed and we were unable to recover it. 
00:28:19.627 [2024-11-19 10:56:06.847693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.627 [2024-11-19 10:56:06.847733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.627 qpair failed and we were unable to recover it.
00:28:19.627 [2024-11-19 10:56:06.848232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.848434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.848582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.848704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.848842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 
00:28:19.627 [2024-11-19 10:56:06.848945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.848972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.849062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.627 [2024-11-19 10:56:06.849089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.627 qpair failed and we were unable to recover it. 00:28:19.627 [2024-11-19 10:56:06.849177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.849203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.849288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.849354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.849528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.849569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.849738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.849786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.849896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.849922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.850049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.850090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.850278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.850328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.850474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.850515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.850652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.850693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.850830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.850871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.850987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.851104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.851217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.851388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.851498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.851709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.851915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.851958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.852096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.852217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.852363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.852506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.852618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.852753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.852961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.853121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.853162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.853358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.853400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.853574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.853602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.853687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.853712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 
00:28:19.628 [2024-11-19 10:56:06.853845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.628 [2024-11-19 10:56:06.853886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.628 qpair failed and we were unable to recover it. 00:28:19.628 [2024-11-19 10:56:06.854015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.854214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.854408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.854619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.854776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.854885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.854911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.855399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.855972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.855998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.856076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.856107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.856218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.856244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.856367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.856411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.856636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.856678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.856844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.856869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.856977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.857091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.857243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.857496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.857673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.857887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.857930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.858049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.858075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.858160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.858185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.858291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.858325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 00:28:19.629 [2024-11-19 10:56:06.858427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.629 [2024-11-19 10:56:06.858453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.629 qpair failed and we were unable to recover it. 
00:28:19.629 [2024-11-19 10:56:06.858561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.858587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.858701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.858741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.858881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.858921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-11-19 10:56:06.859342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.859855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.859895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-11-19 10:56:06.860010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.860163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.860281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.860441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.860554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-11-19 10:56:06.860695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.860834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.860874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.861085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.861225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.861250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-11-19 10:56:06.861362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.630 [2024-11-19 10:56:06.861388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630-00:28:19.634 [2024-11-19 10:56:06.861499 through 10:56:06.878543] The following error sequence repeated ~115 times in rapid succession:
  posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED)
  nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
  qpair failed and we were unable to recover it.
(The tqpair value was 0x1cdbfa0 for most attempts; a smaller number of attempts reported tqpair=0x7f33ac000b90 or tqpair=0x7f33b4000b90 with the same addr=10.0.0.2, port=4420.)
00:28:19.634 [2024-11-19 10:56:06.878656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.878682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 00:28:19.634 [2024-11-19 10:56:06.878794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.878825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 00:28:19.634 [2024-11-19 10:56:06.878952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.878984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 00:28:19.634 [2024-11-19 10:56:06.879124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.879167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 00:28:19.634 [2024-11-19 10:56:06.879278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.879311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 
00:28:19.634 [2024-11-19 10:56:06.879400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.634 [2024-11-19 10:56:06.879426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.634 qpair failed and we were unable to recover it. 00:28:19.634 [2024-11-19 10:56:06.879504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.879530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.879654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.879679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.879789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.879814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.879918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.879949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.880051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.880222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.880332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.880486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.880620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.880760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.880870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.880897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.881502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.881903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.881928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.882047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.882211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.882332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.882454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.882564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.882682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.882882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.882912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.883045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.883177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.883205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.883288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.883324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.635 [2024-11-19 10:56:06.883412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.883438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 
00:28:19.635 [2024-11-19 10:56:06.883549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.635 [2024-11-19 10:56:06.883575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.635 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.883689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.883715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.883795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.883821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.883905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.883930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 
00:28:19.636 [2024-11-19 10:56:06.884166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 
00:28:19.636 [2024-11-19 10:56:06.884867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.884893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.884986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 
00:28:19.636 [2024-11-19 10:56:06.885553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.885967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.885996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.886097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 
00:28:19.636 [2024-11-19 10:56:06.886254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.886404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.886547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.886658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.886769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 
00:28:19.636 [2024-11-19 10:56:06.886884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.886926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.887018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.887047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.636 qpair failed and we were unable to recover it. 00:28:19.636 [2024-11-19 10:56:06.887168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.636 [2024-11-19 10:56:06.887197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.887314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.887427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 
00:28:19.637 [2024-11-19 10:56:06.887567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.887680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.887793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.887908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.887947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.888039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 
00:28:19.637 [2024-11-19 10:56:06.888180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.888294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.888408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.888520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 00:28:19.637 [2024-11-19 10:56:06.888645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 
00:28:19.637 [2024-11-19 10:56:06.888767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.637 [2024-11-19 10:56:06.888796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.637 qpair failed and we were unable to recover it. 
[... the same triplet — posix.c:1054:posix_sock_create "connect() failed, errno = 111" (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." — repeats ~114 more times between 10:56:06.888 and 10:56:06.903, all targeting addr=10.0.0.2, port=4420; the failing qpair is tqpair=0x7f33ac000b90 for most entries, with tqpair=0x7f33b4000b90 appearing from 10:56:06.898 onward and the two alternating thereafter ...]
00:28:19.641 [2024-11-19 10:56:06.903880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.903905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 00:28:19.641 [2024-11-19 10:56:06.904045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.904070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 00:28:19.641 [2024-11-19 10:56:06.904191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.904216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 00:28:19.641 [2024-11-19 10:56:06.904337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.904362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 00:28:19.641 [2024-11-19 10:56:06.904446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.904472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 
00:28:19.641 [2024-11-19 10:56:06.904552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.641 [2024-11-19 10:56:06.904578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.641 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.904674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.904699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.904836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.904862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.904955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.904980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 
00:28:19.642 [2024-11-19 10:56:06.905210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 
00:28:19.642 [2024-11-19 10:56:06.905794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.905943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.905982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 
00:28:19.642 [2024-11-19 10:56:06.906521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.906915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.906941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 
00:28:19.642 [2024-11-19 10:56:06.907152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 
00:28:19.642 [2024-11-19 10:56:06.907810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.907926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.907951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.908029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.908054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.642 [2024-11-19 10:56:06.908150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.642 [2024-11-19 10:56:06.908176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.642 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.908253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 
00:28:19.643 [2024-11-19 10:56:06.908366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.908478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.908603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.908717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.908859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 
00:28:19.643 [2024-11-19 10:56:06.908961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.908987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 
00:28:19.643 [2024-11-19 10:56:06.909671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.909969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.909993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 
00:28:19.643 [2024-11-19 10:56:06.910354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 
00:28:19.643 [2024-11-19 10:56:06.910940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.910965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.911053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.911079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.911157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.911182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.643 [2024-11-19 10:56:06.911296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.643 [2024-11-19 10:56:06.911330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.643 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.911416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.911443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 
00:28:19.644 [2024-11-19 10:56:06.911563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.911590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.911709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.911734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.911848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.911874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.911963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.911989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 
00:28:19.644 [2024-11-19 10:56:06.912212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 
00:28:19.644 [2024-11-19 10:56:06.912828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.912972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.912997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.913084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.913110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.913215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 00:28:19.644 [2024-11-19 10:56:06.913355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.644 [2024-11-19 10:56:06.913382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.644 qpair failed and we were unable to recover it. 
00:28:19.644 [2024-11-19 10:56:06.913491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.644 [2024-11-19 10:56:06.913516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.644 qpair failed and we were unable to recover it.
00:28:19.644 [2024-11-19 10:56:06.914369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.644 [2024-11-19 10:56:06.914397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.644 qpair failed and we were unable to recover it.
00:28:19.648 [... identical connect() failed / sock connection error / qpair failed records repeated through 2024-11-19 10:56:06.928247, alternating between tqpair=0x7f33ac000b90 and tqpair=0x7f33b4000b90, all with addr=10.0.0.2, port=4420 ...]
00:28:19.648 [2024-11-19 10:56:06.928386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.928414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.928532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.928559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.928644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.928669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.928785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.928810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.928916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.928941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 
00:28:19.648 [2024-11-19 10:56:06.929058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 
00:28:19.648 [2024-11-19 10:56:06.929723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.929863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.929978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.648 [2024-11-19 10:56:06.930006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.648 qpair failed and we were unable to recover it. 00:28:19.648 [2024-11-19 10:56:06.930122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.930269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 
00:28:19.649 [2024-11-19 10:56:06.930419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.930528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.930661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.930805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.930943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.930968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 
00:28:19.649 [2024-11-19 10:56:06.931089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 
00:28:19.649 [2024-11-19 10:56:06.931704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.931863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.931984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 
00:28:19.649 [2024-11-19 10:56:06.932329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.932884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.932909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 
00:28:19.649 [2024-11-19 10:56:06.932996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.933022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.933133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.933160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.933244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.933269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.649 [2024-11-19 10:56:06.933359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.649 [2024-11-19 10:56:06.933385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.649 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.933495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.933520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.933597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.933622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.933704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.933730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.933838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.933863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.933973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.933998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.934105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.934390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.934532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.934675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.934783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.934898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.934924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.935545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.935952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.935977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.936205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 
00:28:19.650 [2024-11-19 10:56:06.936803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.936913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.650 [2024-11-19 10:56:06.936939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.650 qpair failed and we were unable to recover it. 00:28:19.650 [2024-11-19 10:56:06.937029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 
00:28:19.651 [2024-11-19 10:56:06.937410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.937915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.937940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 
00:28:19.651 [2024-11-19 10:56:06.938022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.938049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.938135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.938160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.938271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.938297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.938394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.938420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 00:28:19.651 [2024-11-19 10:56:06.938508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.651 [2024-11-19 10:56:06.938535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.651 qpair failed and we were unable to recover it. 
00:28:19.655 [2024-11-19 10:56:06.952718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.952744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.952826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.952851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.952960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.952986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 
00:28:19.655 [2024-11-19 10:56:06.953334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.953849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 
00:28:19.655 [2024-11-19 10:56:06.953959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.953984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 
00:28:19.655 [2024-11-19 10:56:06.954595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.655 qpair failed and we were unable to recover it. 00:28:19.655 [2024-11-19 10:56:06.954965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.655 [2024-11-19 10:56:06.954990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.955205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.955818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.955958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.955986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.956478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.956891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.956918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.957140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.957807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.957833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.957974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.958115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.958252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.958388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 
00:28:19.656 [2024-11-19 10:56:06.958505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.958615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.656 qpair failed and we were unable to recover it. 00:28:19.656 [2024-11-19 10:56:06.958729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.656 [2024-11-19 10:56:06.958755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.958870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.958895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.959175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.959771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.959912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.959995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.960370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.960853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.960880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.960996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.961687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.961882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.961978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.962005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.962144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 00:28:19.657 [2024-11-19 10:56:06.962231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.657 [2024-11-19 10:56:06.962258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.657 qpair failed and we were unable to recover it. 
00:28:19.657 [2024-11-19 10:56:06.962374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.657 [2024-11-19 10:56:06.962400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.657 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats continuously from 10:56:06.962520 through 10:56:06.976914 against addr=10.0.0.2, port=4420, alternating between tqpair=0x7f33ac000b90, 0x7f33b4000b90, and 0x7f33a8000b90 ...]
00:28:19.662 [2024-11-19 10:56:06.977010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 
00:28:19.662 [2024-11-19 10:56:06.977583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.977973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.977998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.978117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 
00:28:19.662 [2024-11-19 10:56:06.978245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.978388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.978524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.978694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.978817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 
00:28:19.662 [2024-11-19 10:56:06.978924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.978950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 
00:28:19.662 [2024-11-19 10:56:06.979465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.979866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.662 [2024-11-19 10:56:06.979981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.980007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 
00:28:19.662 [2024-11-19 10:56:06.980103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.662 [2024-11-19 10:56:06.980130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.662 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.980242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.980394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.980497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.980636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.663 [2024-11-19 10:56:06.980775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.980892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.980919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.663 [2024-11-19 10:56:06.981382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.981909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.981935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.663 [2024-11-19 10:56:06.982049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.663 [2024-11-19 10:56:06.982683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.982898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.982926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.663 [2024-11-19 10:56:06.983269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 00:28:19.663 [2024-11-19 10:56:06.983789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.663 [2024-11-19 10:56:06.983815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.663 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.983895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.983920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.984556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.984966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.985049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.985233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.985371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.985489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.985607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.985773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.985887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.985913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.986544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.986915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.986942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.987025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.987166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.987316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.987434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.987599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 00:28:19.664 [2024-11-19 10:56:06.987751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.664 [2024-11-19 10:56:06.987778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.664 qpair failed and we were unable to recover it. 
00:28:19.664 [2024-11-19 10:56:06.987891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.987917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.988510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.988915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.988942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.989061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.989222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.989370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.989519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.989652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.989766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.989907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.989932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.990612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.990894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.990983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.991102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.991244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.991396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.991505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.991618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 00:28:19.665 [2024-11-19 10:56:06.991733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.665 [2024-11-19 10:56:06.991760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.665 qpair failed and we were unable to recover it. 
00:28:19.665 [2024-11-19 10:56:06.991844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.991870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.991957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.991982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 
00:28:19.666 [2024-11-19 10:56:06.992462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.992952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.992977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 
00:28:19.666 [2024-11-19 10:56:06.993095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 
00:28:19.666 [2024-11-19 10:56:06.993764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.993907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.993998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 
00:28:19.666 [2024-11-19 10:56:06.994415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.994884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.994910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 
00:28:19.666 [2024-11-19 10:56:06.995047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.995073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.995184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.995210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.995322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.995348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.666 [2024-11-19 10:56:06.995439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.666 [2024-11-19 10:56:06.995465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.666 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.995608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.995635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.995718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.995744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.995837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.995863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.995951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.995977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.996440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.996910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.996936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.997051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.997635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.997934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.997961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.998264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.667 [2024-11-19 10:56:06.998829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 
00:28:19.667 [2024-11-19 10:56:06.998950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.667 [2024-11-19 10:56:06.998978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.667 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 
00:28:19.668 [2024-11-19 10:56:06.999532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:06.999903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:06.999932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 00:28:19.668 [2024-11-19 10:56:07.000021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.668 [2024-11-19 10:56:07.000050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.668 qpair failed and we were unable to recover it. 
00:28:19.668 [2024-11-19 10:56:07.000133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.000952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.000978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.001862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.001982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.668 [2024-11-19 10:56:07.002755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.668 [2024-11-19 10:56:07.002787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.668 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.002879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.002906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.002989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.003905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.003984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.004860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.004888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.005886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.005995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.669 [2024-11-19 10:56:07.006669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.669 qpair failed and we were unable to recover it.
00:28:19.669 [2024-11-19 10:56:07.006757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.006784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.006871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.006897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.006979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.007939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.007965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.008896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.008921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.009901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.009926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.670 [2024-11-19 10:56:07.010883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.670 [2024-11-19 10:56:07.010909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.670 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.011955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.011980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.012121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.012146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.012224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.671 [2024-11-19 10:56:07.012250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.671 qpair failed and we were unable to recover it.
00:28:19.671 [2024-11-19 10:56:07.012344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.012492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.012596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.012712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.012844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 
00:28:19.671 [2024-11-19 10:56:07.012968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.012994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 
00:28:19.671 [2024-11-19 10:56:07.013619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.013889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.013986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.014137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 
00:28:19.671 [2024-11-19 10:56:07.014285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.014411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.014554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.014659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.014776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 
00:28:19.671 [2024-11-19 10:56:07.014927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.014956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.671 [2024-11-19 10:56:07.015042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.671 [2024-11-19 10:56:07.015069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.671 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.015531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.015924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.015951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.016159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.016781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.016891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.016917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.017410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.017942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.017968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.018051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 
00:28:19.672 [2024-11-19 10:56:07.018651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.672 [2024-11-19 10:56:07.018889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.672 [2024-11-19 10:56:07.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.672 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.018999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.019245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.019848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.019967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.019995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.020474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.020958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.020983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.021107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.021259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.021422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.021552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.021714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.021826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.021941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.021968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.022435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.022969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.022994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 
00:28:19.673 [2024-11-19 10:56:07.023102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.023128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.023239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.023271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.023381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.673 [2024-11-19 10:56:07.023420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.673 qpair failed and we were unable to recover it. 00:28:19.673 [2024-11-19 10:56:07.023511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.023537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.023650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.023677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.023808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.023834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.023945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.023971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.024393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.024866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.024893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.025037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.025638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.025915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.025943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.026321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.026801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.026905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.026930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.027527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.027912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.027938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.028051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.028090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 
00:28:19.674 [2024-11-19 10:56:07.028188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.674 [2024-11-19 10:56:07.028215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.674 qpair failed and we were unable to recover it. 00:28:19.674 [2024-11-19 10:56:07.028346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.028487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.028608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.028711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.028826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.028940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.028970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.029433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.029930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.029955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.030071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.030742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.030882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.030980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.031371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.031848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.031949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.031975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 
00:28:19.675 [2024-11-19 10:56:07.032533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.675 [2024-11-19 10:56:07.032782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.675 qpair failed and we were unable to recover it. 00:28:19.675 [2024-11-19 10:56:07.032894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.032920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.033027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.033166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.033294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.033475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.033634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.033799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.033939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.033966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.034600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.034867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.034981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.035121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.035263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.035405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.035521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.035658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.035821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.035943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.035969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.036069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.036109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.036191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.036219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.036323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.036351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 00:28:19.676 [2024-11-19 10:56:07.036438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.676 [2024-11-19 10:56:07.036466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.676 qpair failed and we were unable to recover it. 
00:28:19.676 [2024-11-19 10:56:07.036585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.036611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.036697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.036723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.036833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.036859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.036971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.676 [2024-11-19 10:56:07.037881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.676 qpair failed and we were unable to recover it.
00:28:19.676 [2024-11-19 10:56:07.037975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.038899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.038980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.039866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.039895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.040885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.040910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.677 qpair failed and we were unable to recover it.
00:28:19.677 [2024-11-19 10:56:07.041969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.677 [2024-11-19 10:56:07.041999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.042888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.042978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.043943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.043969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.044963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.044991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.045880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.045973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.046000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.046084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.678 [2024-11-19 10:56:07.046110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.678 qpair failed and we were unable to recover it.
00:28:19.678 [2024-11-19 10:56:07.046189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.046962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.046987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.047860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.047886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.048007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.048033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.048159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.048198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.048279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.048315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.048444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.679 [2024-11-19 10:56:07.048483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:19.679 qpair failed and we were unable to recover it.
00:28:19.679 [2024-11-19 10:56:07.048573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.048602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.048684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.048710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.048833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.048861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.048976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.049002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.049093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.049120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 
00:28:19.679 [2024-11-19 10:56:07.049209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.049235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.049353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.049380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.049510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.679 [2024-11-19 10:56:07.049536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.679 qpair failed and we were unable to recover it. 00:28:19.679 [2024-11-19 10:56:07.049628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.049655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.049738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.049765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.049850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.049878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.049996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.050128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.050294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.050474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.050624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.050753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.050893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.050918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.051287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.051830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.051965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.051991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.052576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.052904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.052930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.053044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.680 [2024-11-19 10:56:07.053154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.053294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.053420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.053526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 00:28:19.680 [2024-11-19 10:56:07.053663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.680 [2024-11-19 10:56:07.053689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.680 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.053772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.053798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.053871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.053978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.054392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.054915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.054942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.055065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.055702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.055936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.055961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.056348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.056915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.056941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.057040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.057156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.057264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.057394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.681 [2024-11-19 10:56:07.057522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 
00:28:19.681 [2024-11-19 10:56:07.057695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.681 [2024-11-19 10:56:07.057722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.681 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.057837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.057863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.057947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.057973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 
00:28:19.682 [2024-11-19 10:56:07.058326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.058901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.058928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 
00:28:19.682 [2024-11-19 10:56:07.059022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 
00:28:19.682 [2024-11-19 10:56:07.059645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.059919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.059999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.060115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 
00:28:19.682 [2024-11-19 10:56:07.060252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.060482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.060594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.060734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.060872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.060897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 
00:28:19.682 [2024-11-19 10:56:07.060983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.061009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.061113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.061152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.682 [2024-11-19 10:56:07.061271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.682 [2024-11-19 10:56:07.061298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.682 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.061390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.061416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.061531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.061557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.061644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.061669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.061784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.061810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.061907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.061934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.062255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.062915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.062999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.063629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.063878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.063984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.064141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.064256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.064376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.064488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.064596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.064716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 
00:28:19.683 [2024-11-19 10:56:07.064936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.064962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.065070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.065096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.065224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.065263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.683 [2024-11-19 10:56:07.065363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.683 [2024-11-19 10:56:07.065391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.683 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.065480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.065506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.065588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.065614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.065701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.065727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.065814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.065840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.065925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.065951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.066192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.066828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.066970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.066996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.067445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.067917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.067944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.068036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.068647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.068917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.068943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.069067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.069093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 00:28:19.684 [2024-11-19 10:56:07.069181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.684 [2024-11-19 10:56:07.069206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.684 qpair failed and we were unable to recover it. 
00:28:19.684 [2024-11-19 10:56:07.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.069406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.069527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.069645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.069756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.069871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.069897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.069981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.070475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.070949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.070974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.071079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.071671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.071916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.071942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.072241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 
00:28:19.685 [2024-11-19 10:56:07.072852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.685 [2024-11-19 10:56:07.072877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.685 qpair failed and we were unable to recover it. 00:28:19.685 [2024-11-19 10:56:07.072965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.072990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.073412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.073882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.073907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.073991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.074577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.074966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.074991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.075082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.075192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.075315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.075461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.075626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.075740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.075893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.075918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.076010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.076134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.076253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.076373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 
00:28:19.686 [2024-11-19 10:56:07.076490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.686 [2024-11-19 10:56:07.076604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.686 [2024-11-19 10:56:07.076630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.686 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.076730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.076900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.076926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.077165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.077772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.077917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.077943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.078461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.078972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.078997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.079104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.079739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.079967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.079993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.080124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.080164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.080263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.080289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 
00:28:19.687 [2024-11-19 10:56:07.080398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.080425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.080505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.080531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.080621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.687 [2024-11-19 10:56:07.080646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.687 qpair failed and we were unable to recover it. 00:28:19.687 [2024-11-19 10:56:07.080738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.080766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.080882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.080908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 
00:28:19.688 [2024-11-19 10:56:07.081001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 
00:28:19.688 [2024-11-19 10:56:07.081628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.081904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.081992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.082142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 
00:28:19.688 [2024-11-19 10:56:07.082256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.082381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.082497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.082637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.082775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 
00:28:19.688 [2024-11-19 10:56:07.082920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.082945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 
00:28:19.688 [2024-11-19 10:56:07.083500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.688 [2024-11-19 10:56:07.083911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.688 qpair failed and we were unable to recover it. 00:28:19.688 [2024-11-19 10:56:07.083996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.689 [2024-11-19 10:56:07.084021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.689 qpair failed and we were unable to recover it. 
00:28:19.690 [2024-11-19 10:56:07.089175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.690 [2024-11-19 10:56:07.089200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.690 qpair failed and we were unable to recover it.
00:28:19.690 [2024-11-19 10:56:07.089285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.690 [2024-11-19 10:56:07.089317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.690 qpair failed and we were unable to recover it.
00:28:19.690 [2024-11-19 10:56:07.089412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.690 [2024-11-19 10:56:07.089438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.690 qpair failed and we were unable to recover it.
00:28:19.690 [2024-11-19 10:56:07.089554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.690 [2024-11-19 10:56:07.089593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.690 qpair failed and we were unable to recover it.
00:28:19.690 [2024-11-19 10:56:07.089713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.690 [2024-11-19 10:56:07.089741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:19.690 qpair failed and we were unable to recover it.
00:28:19.692 [2024-11-19 10:56:07.097817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.097844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.097971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.097996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 
00:28:19.692 [2024-11-19 10:56:07.098422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.098917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.098943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 
00:28:19.692 [2024-11-19 10:56:07.099019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.099044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.692 [2024-11-19 10:56:07.099161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.692 [2024-11-19 10:56:07.099188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.692 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.099278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.099416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.099532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.099673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.099784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.099929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.099957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.100297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.100814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.100925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.100951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.101565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.101918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.101944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.102143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 
00:28:19.693 [2024-11-19 10:56:07.102772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.693 qpair failed and we were unable to recover it. 00:28:19.693 [2024-11-19 10:56:07.102878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.693 [2024-11-19 10:56:07.102903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.694 [2024-11-19 10:56:07.103375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.103915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.103940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.694 [2024-11-19 10:56:07.104065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.104175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.104311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.104442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.694 [2024-11-19 10:56:07.104717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.104889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.104916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.694 [2024-11-19 10:56:07.105504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.105890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.105974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.694 [2024-11-19 10:56:07.106081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.106202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.106348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.106462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 00:28:19.694 [2024-11-19 10:56:07.106574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.694 [2024-11-19 10:56:07.106600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.694 qpair failed and we were unable to recover it. 
00:28:19.695 [2024-11-19 10:56:07.106696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.695 [2024-11-19 10:56:07.106722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.695 qpair failed and we were unable to recover it. 00:28:19.695 [2024-11-19 10:56:07.106858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.695 [2024-11-19 10:56:07.106884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.695 qpair failed and we were unable to recover it. 00:28:19.695 [2024-11-19 10:56:07.106979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.695 [2024-11-19 10:56:07.107005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.695 qpair failed and we were unable to recover it. 00:28:19.695 [2024-11-19 10:56:07.107108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.695 [2024-11-19 10:56:07.107135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.695 qpair failed and we were unable to recover it. 00:28:19.695 [2024-11-19 10:56:07.107244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.695 [2024-11-19 10:56:07.107270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.695 qpair failed and we were unable to recover it. 
00:28:19.695 [2024-11-19 10:56:07.107363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.695 [2024-11-19 10:56:07.107390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.695 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for tqpair=0x7f33b4000b90 and tqpair=0x7f33a8000b90 through [2024-11-19 10:56:07.122049] ...]
00:28:19.698 [2024-11-19 10:56:07.122137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.698 [2024-11-19 10:56:07.122164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.698 qpair failed and we were unable to recover it. 00:28:19.698 [2024-11-19 10:56:07.122248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.698 [2024-11-19 10:56:07.122273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.698 qpair failed and we were unable to recover it. 00:28:19.698 [2024-11-19 10:56:07.122391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.698 [2024-11-19 10:56:07.122416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.698 qpair failed and we were unable to recover it. 00:28:19.698 [2024-11-19 10:56:07.122558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.698 [2024-11-19 10:56:07.122584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.698 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.122667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.122693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.122783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.122808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.122919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.122944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.123411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.123871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.123896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.124012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.124603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.124868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.124992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.125164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.125331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.125457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.125635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.125787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 00:28:19.699 [2024-11-19 10:56:07.125931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.699 [2024-11-19 10:56:07.125956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.699 qpair failed and we were unable to recover it. 
00:28:19.699 [2024-11-19 10:56:07.126037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.126177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.126338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.126479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.126690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.126825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.126858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.126982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.127120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.127273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.127490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.127657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.127797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.127831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.127970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.128392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.128877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.128902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.129005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.129601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.129896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.129978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.130004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.700 [2024-11-19 10:56:07.130124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.130152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-11-19 10:56:07.130280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.700 [2024-11-19 10:56:07.130327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.700 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.130435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.130461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.130536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.130561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.130644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.130673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.130759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.130801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.701 [2024-11-19 10:56:07.130968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.701 [2024-11-19 10:56:07.131651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.131904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.131931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.132046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.132071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 00:28:19.701 [2024-11-19 10:56:07.132173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.701 [2024-11-19 10:56:07.132201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.701 [2024-11-19 10:56:07.132289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.701 [2024-11-19 10:56:07.132324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:19.701 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) repeat from 10:56:07.132412 through 10:56:07.147707 for tqpair=0x7f33b4000b90 and tqpair=0x7f33a8000b90, all targeting addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." — repeated entries omitted ...]
00:28:19.705 [2024-11-19 10:56:07.147788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.147814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.147913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.147939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-11-19 10:56:07.148429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.148954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.148980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-11-19 10:56:07.149094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-11-19 10:56:07.149711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.149929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.149955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.150037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.150062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 00:28:19.705 [2024-11-19 10:56:07.150162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.150189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.705 qpair failed and we were unable to recover it. 
00:28:19.705 [2024-11-19 10:56:07.150286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.705 [2024-11-19 10:56:07.150317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.150403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.150429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.150563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.150595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.150700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.150733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.150907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.150940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.151053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.151230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.151371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.151556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.151665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.151764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.151904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.151930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.152416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.152945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.152971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.153065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.153643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.153938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.153964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.154049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.154078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.154173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.154199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 
00:28:19.706 [2024-11-19 10:56:07.154281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.154314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.706 [2024-11-19 10:56:07.154424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.706 [2024-11-19 10:56:07.154451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.706 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.154537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.154563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.154648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.154674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.154766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.154792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.154876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.154902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.155531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.155963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.155988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.156082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.156317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.156441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.156560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.156704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.156849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.156963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.156990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.157693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.157935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.157961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.158083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.158111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.158204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.158249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 
00:28:19.707 [2024-11-19 10:56:07.158336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.158363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.158496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.158523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.158604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.707 [2024-11-19 10:56:07.158630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.707 qpair failed and we were unable to recover it. 00:28:19.707 [2024-11-19 10:56:07.158731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.158759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 00:28:19.708 [2024-11-19 10:56:07.158848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.158890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 
00:28:19.708 [2024-11-19 10:56:07.158986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.159014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 00:28:19.708 [2024-11-19 10:56:07.159126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.159160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 00:28:19.708 [2024-11-19 10:56:07.159316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.159343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 00:28:19.708 [2024-11-19 10:56:07.159423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.159449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 00:28:19.708 [2024-11-19 10:56:07.159542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.708 [2024-11-19 10:56:07.159570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.708 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.159667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.159693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.159784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.159813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.159951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.159980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.160340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.160844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.160967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.160996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.161552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.161926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.161951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.162050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.162195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.162321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.162440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.162579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 00:28:19.996 [2024-11-19 10:56:07.162716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.996 [2024-11-19 10:56:07.162742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.996 qpair failed and we were unable to recover it. 
00:28:19.996 [2024-11-19 10:56:07.162829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.162855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.162936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.162962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 
00:28:19.997 [2024-11-19 10:56:07.163474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.163935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.163961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 
00:28:19.997 [2024-11-19 10:56:07.164050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.164207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.164377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.164531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.164663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 
00:28:19.997 [2024-11-19 10:56:07.164834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.164867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.164974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 
00:28:19.997 [2024-11-19 10:56:07.165560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.165915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.165941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.166033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.166060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 
00:28:19.997 [2024-11-19 10:56:07.166163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.166204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.166300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.166341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.166459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.166486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.166607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.997 [2024-11-19 10:56:07.166635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.997 qpair failed and we were unable to recover it. 00:28:19.997 [2024-11-19 10:56:07.166746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.166787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.166889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.166919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.167627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.167935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.167963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.168333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.168955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.168988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.169141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.169285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.169439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.169575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.169717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.169868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.169903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.170009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.170043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.170190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.170224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.170343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.170375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 00:28:19.998 [2024-11-19 10:56:07.170508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.170536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it. 
00:28:19.998 [2024-11-19 10:56:07.170624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.998 [2024-11-19 10:56:07.170653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:19.998 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeats with varying timestamps from 10:56:07.170 through 10:56:07.188, cycling over tqpairs 0x7f33ac000b90, 0x7f33a8000b90, and 0x7f33b4000b90, all targeting addr=10.0.0.2, port=4420]
00:28:20.002 [2024-11-19 10:56:07.188169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.188203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.188364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.188396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.188533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.188563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.188656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.188687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.188821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.188856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 
00:28:20.003 [2024-11-19 10:56:07.188998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.189150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.189274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.189420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.189627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 
00:28:20.003 [2024-11-19 10:56:07.189761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.189922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.189954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.190052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.190210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.190394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 
00:28:20.003 [2024-11-19 10:56:07.190553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.190730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.190894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.190925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.191083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.191233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 
00:28:20.003 [2024-11-19 10:56:07.191410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.191554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.191729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.191881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.192045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.192078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 
00:28:20.003 [2024-11-19 10:56:07.192210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.192242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.192431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.192480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.192587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.192622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.003 [2024-11-19 10:56:07.192730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.003 [2024-11-19 10:56:07.192763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.003 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.192908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.192941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.193074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.193205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.193374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.193521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.193706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.193860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.193898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.194019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.194171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.194383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.194562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.194738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.194873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.194906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.195055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.195200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.195377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.195554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.195702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.195948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.195981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.196420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.196909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.196936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 
00:28:20.004 [2024-11-19 10:56:07.197030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.197061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.197151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.197179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.004 [2024-11-19 10:56:07.197269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.004 [2024-11-19 10:56:07.197295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.004 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.197411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.197449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.197557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.197596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 
00:28:20.005 [2024-11-19 10:56:07.197692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.197721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.197856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.197883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 
00:28:20.005 [2024-11-19 10:56:07.198379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.198903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.198929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 
00:28:20.005 [2024-11-19 10:56:07.199033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.199067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.199167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.199200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.199310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.199358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.199443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.199468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 00:28:20.005 [2024-11-19 10:56:07.199552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.005 [2024-11-19 10:56:07.199578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.005 qpair failed and we were unable to recover it. 
00:28:20.005 [2024-11-19 10:56:07.199741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.005 [2024-11-19 10:56:07.199767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.005 qpair failed and we were unable to recover it.
[the same three-line error sequence — posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 10:56:07.199900 through 10:56:07.217124, differing only in timestamps; the tqpair value cycles through 0x7f33b4000b90, 0x7f33ac000b90, and 0x7f33a8000b90, and every attempt targets addr=10.0.0.2, port=4420]
00:28:20.009 [2024-11-19 10:56:07.217276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.217321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.217480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.217513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.217633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.217669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.217780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.217815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.217957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.217992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 
00:28:20.009 [2024-11-19 10:56:07.218098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.218133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.218276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.218324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.218468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.218520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.218705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.218740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.009 qpair failed and we were unable to recover it. 00:28:20.009 [2024-11-19 10:56:07.218886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.009 [2024-11-19 10:56:07.218938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.219086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.219276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.219408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.219550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.219788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.219904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.219929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.220598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.220909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.220944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.221092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.221128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.221245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.221280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.221425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.221467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.221582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.221616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.221770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.221805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.221976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.222153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.222312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.222519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.222651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.222759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.222885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.222911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 
00:28:20.010 [2024-11-19 10:56:07.223046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.223081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.223206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.223242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.223399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.223425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.010 qpair failed and we were unable to recover it. 00:28:20.010 [2024-11-19 10:56:07.223539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.010 [2024-11-19 10:56:07.223565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.223700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.223726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.223870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.223906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.224049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.224247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.224443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.224618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.224766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.224904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.224939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.225048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.225186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.225377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.225623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.225765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.225887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.225914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.226073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.226206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.226369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.226519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.226705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.226895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.226930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.227041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.227181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.227324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.227497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.227687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.227923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.227949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 
00:28:20.011 [2024-11-19 10:56:07.228038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.228071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.228153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.228179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.011 qpair failed and we were unable to recover it. 00:28:20.011 [2024-11-19 10:56:07.228321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.011 [2024-11-19 10:56:07.228357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.228466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.228503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.228656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.228694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.228847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.228884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.229025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.229180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.229331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.229499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.229619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.229847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.229884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.230005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.230043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.230176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.230213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.230347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.230385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.230565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.230601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.230817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.230854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.231467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.231825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.231991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.232025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.232196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.232233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.232402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.232438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.232548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.232583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.232744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.232800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.232969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.233007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.233166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.233203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 
00:28:20.012 [2024-11-19 10:56:07.233367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.012 [2024-11-19 10:56:07.233402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.012 qpair failed and we were unable to recover it. 00:28:20.012 [2024-11-19 10:56:07.233516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.233549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.233683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.233719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.233837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.233874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.234017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 
00:28:20.013 [2024-11-19 10:56:07.234170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.234371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.234595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.234760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.234944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.234983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 
00:28:20.013 [2024-11-19 10:56:07.235105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.235275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.235449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.235597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.235784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 
00:28:20.013 [2024-11-19 10:56:07.235936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.235973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.236125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.236161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.236318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.236356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.236473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.236510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.236668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.236701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 
00:28:20.013 [2024-11-19 10:56:07.236808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.236842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.236966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.237155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.237344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.237499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 
00:28:20.013 [2024-11-19 10:56:07.237636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.237842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.013 [2024-11-19 10:56:07.237876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.013 qpair failed and we were unable to recover it. 00:28:20.013 [2024-11-19 10:56:07.237996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.238031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.238164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.238203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.238363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.238416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.238523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.238557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.238747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.238993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.239182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.239377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.239522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.239655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.239817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.239861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.240413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.240867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.240982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.241132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.241296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.241451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.241656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.241807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.241841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.241996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.242197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.242357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.242553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.242742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 
00:28:20.014 [2024-11-19 10:56:07.242946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.014 [2024-11-19 10:56:07.242983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.014 qpair failed and we were unable to recover it. 00:28:20.014 [2024-11-19 10:56:07.243114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.243151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.243266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.243323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.243453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.243497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.243614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.243651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 
00:28:20.015 [2024-11-19 10:56:07.243837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.243874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.244001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.244144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.244319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.244559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 
00:28:20.015 [2024-11-19 10:56:07.244711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.244869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.244908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.245038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.245076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.245209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.245248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.245412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.245451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 
00:28:20.015 [2024-11-19 10:56:07.245605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.245644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.245802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.245839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.246022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.246059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.246208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.246247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 00:28:20.015 [2024-11-19 10:56:07.246409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.015 [2024-11-19 10:56:07.246447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.015 qpair failed and we were unable to recover it. 
00:28:20.015 [2024-11-19 10:56:07.246635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.015 [2024-11-19 10:56:07.246672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.015 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats verbatim (same errno 111, same tqpair=0x7f33ac000b90, same addr=10.0.0.2, port=4420), differing only in timestamps, from 10:56:07.246635 through 10:56:07.267760 ...]
00:28:20.019 [2024-11-19 10:56:07.267935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.267975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.268186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.268225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.268380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.268420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.268594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.268650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.268898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.268955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 
00:28:20.019 [2024-11-19 10:56:07.269151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.269189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.269367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.019 [2024-11-19 10:56:07.269394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.019 qpair failed and we were unable to recover it. 00:28:20.019 [2024-11-19 10:56:07.269484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.269515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.269601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.269629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.269717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.269770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.269958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.270127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.270160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.270319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.270378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.270504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.270547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.270715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.270757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.270939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.270983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.271219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.271257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.271450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.271494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.271665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.271709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.271932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.272095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.272134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.272273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.272322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.272461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.272503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.272710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.272743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.272860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.272894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.273047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.273180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.273296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.273420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.273605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.273823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.273884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.274083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.274122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.274248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.274288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.274524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.274762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.274788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 
00:28:20.020 [2024-11-19 10:56:07.274901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.020 [2024-11-19 10:56:07.274928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.020 qpair failed and we were unable to recover it. 00:28:20.020 [2024-11-19 10:56:07.275018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.275178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.275405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.275590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.275730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.275872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.275930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.276056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.276096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.276247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.276286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.276433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.276477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.276598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.276641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.276839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.276885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.277096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.277147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.277327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.277368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.277569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.277626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.277757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.277810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.277922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.277956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.278067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.278101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.278246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.278281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.278455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.278499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.278693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.278736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.278941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.278975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.279092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.279127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.279336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.279372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.279509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.279543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.279689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.279723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.279925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.279969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.280206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.280240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.280388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.280447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 00:28:20.021 [2024-11-19 10:56:07.280627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.021 [2024-11-19 10:56:07.280670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.021 qpair failed and we were unable to recover it. 
00:28:20.021 [2024-11-19 10:56:07.280844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.280899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.281011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.281228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.281348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.281460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 
00:28:20.022 [2024-11-19 10:56:07.281685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.281852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.281886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.282029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.282063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.282222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 00:28:20.022 [2024-11-19 10:56:07.282441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.022 [2024-11-19 10:56:07.282481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.022 qpair failed and we were unable to recover it. 
00:28:20.022 [2024-11-19 10:56:07.282652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.022 [2024-11-19 10:56:07.282696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.022 qpair failed and we were unable to recover it.
00:28:20.026 (last message sequence repeated ~114 more times through [2024-11-19 10:56:07.305454]: every connect() attempt to 10.0.0.2 port 4420 on tqpair=0x7f33ac000b90 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered)
00:28:20.026 [2024-11-19 10:56:07.305605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.305653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.305812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.305860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.306040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.306085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.306310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.306350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.306497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.306556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 
00:28:20.026 [2024-11-19 10:56:07.306720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.306765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.306967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.306993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.307109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.307135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.307252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.307277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.307400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.307427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 
00:28:20.026 [2024-11-19 10:56:07.307600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.307646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.307828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.307863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.307972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.308005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.308128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.308168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.308288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.308338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 
00:28:20.026 [2024-11-19 10:56:07.308485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.308531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.026 qpair failed and we were unable to recover it. 00:28:20.026 [2024-11-19 10:56:07.308746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.026 [2024-11-19 10:56:07.308787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.308969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.309022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.309202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.309241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.309403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.309465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.309625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.309877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.309904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.310018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.310044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.310127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.310179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.310349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.310384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.310527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.310561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.310709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.310763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.310955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.311001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.311153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.311193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.311382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.311422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.311578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.311612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.311737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.311771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.311978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.312024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.312215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.312256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.312481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.312534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.312695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.312741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.312915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.312961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.313165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.313272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.313298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.313496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.313522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.313632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.313658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.313742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.313793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.313974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.314020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.314187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.314228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 00:28:20.027 [2024-11-19 10:56:07.314442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.027 [2024-11-19 10:56:07.314489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.027 qpair failed and we were unable to recover it. 
00:28:20.027 [2024-11-19 10:56:07.314733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.314759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.314880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.314906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.315053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.315087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.315226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.315260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.315455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.315502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 
00:28:20.028 [2024-11-19 10:56:07.315652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.315710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.315929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.315955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 
00:28:20.028 [2024-11-19 10:56:07.316448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.316817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.316843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 
00:28:20.028 [2024-11-19 10:56:07.317203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.317347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.317479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.317650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.317896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.317942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 
00:28:20.028 [2024-11-19 10:56:07.318128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.318378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.318504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.318638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.318749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 
00:28:20.028 [2024-11-19 10:56:07.318914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.318973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.319168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.319207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.319409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.319457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.319623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.028 [2024-11-19 10:56:07.319657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.028 qpair failed and we were unable to recover it. 00:28:20.028 [2024-11-19 10:56:07.319798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.319825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 
00:28:20.029 [2024-11-19 10:56:07.319938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.319964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 00:28:20.029 [2024-11-19 10:56:07.320123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.320162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 00:28:20.029 [2024-11-19 10:56:07.320288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.320359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 00:28:20.029 [2024-11-19 10:56:07.320533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.320580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 00:28:20.029 [2024-11-19 10:56:07.320773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.029 [2024-11-19 10:56:07.320818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.029 qpair failed and we were unable to recover it. 
00:28:20.033 [2024-11-19 10:56:07.344544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.344606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.344794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.344856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 
00:28:20.033 [2024-11-19 10:56:07.345419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.345834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.345875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.346031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.346070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 
00:28:20.033 [2024-11-19 10:56:07.346199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.346241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.346430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.346489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.346658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.346694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.346836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.346870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.347022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 
00:28:20.033 [2024-11-19 10:56:07.347171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.347338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.347545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.347720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.347954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.347998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 
00:28:20.033 [2024-11-19 10:56:07.348195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.348236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.348383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.348423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.348611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.348655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.348834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.033 [2024-11-19 10:56:07.348878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.033 qpair failed and we were unable to recover it. 00:28:20.033 [2024-11-19 10:56:07.349004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.349048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.349206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.349245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.349395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.349438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.349556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.349600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.349787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.349831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.350011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.350240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.350479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.350624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.350738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.350952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.350995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.351146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.351185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.351346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.351387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.351559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.351620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.351774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.351817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.352001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.352169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.352334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.352447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.352595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.352782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.352825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.353053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.353092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.353223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.353261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.353396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.353437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.353610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.353674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.353869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.353903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 
00:28:20.034 [2024-11-19 10:56:07.354065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.354110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.354331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.034 [2024-11-19 10:56:07.354371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.034 qpair failed and we were unable to recover it. 00:28:20.034 [2024-11-19 10:56:07.354547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.354614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.354781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.354851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.354988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 
00:28:20.035 [2024-11-19 10:56:07.355190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.355368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.355595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.355737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.355873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.355916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 
00:28:20.035 [2024-11-19 10:56:07.356101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.356162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.356296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.356343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.356530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.356573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.356746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.356789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.356972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 
00:28:20.035 [2024-11-19 10:56:07.357123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.357296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.357532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.357680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.357866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.357908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 
00:28:20.035 [2024-11-19 10:56:07.358089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.358128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.358284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.358347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.358524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.358583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.358781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.358825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 00:28:20.035 [2024-11-19 10:56:07.359056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.035 [2024-11-19 10:56:07.359090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.035 qpair failed and we were unable to recover it. 
00:28:20.035 [2024-11-19 10:56:07.359258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.035 [2024-11-19 10:56:07.359284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.035 qpair failed and we were unable to recover it.
[... the same record repeats approximately 110 more times between 10:56:07.359403 and 10:56:07.381465: connect() to 10.0.0.2 port 4420 failing with errno = 111 (ECONNREFUSED) for tqpair=0x7f33ac000b90, followed each time by "qpair failed and we were unable to recover it." ...]
00:28:20.039 [2024-11-19 10:56:07.381510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.039 [2024-11-19 10:56:07.381554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.039 qpair failed and we were unable to recover it.
00:28:20.039 [2024-11-19 10:56:07.381685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.039 [2024-11-19 10:56:07.381729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.039 qpair failed and we were unable to recover it. 00:28:20.039 [2024-11-19 10:56:07.381898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.039 [2024-11-19 10:56:07.381941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.039 qpair failed and we were unable to recover it. 00:28:20.039 [2024-11-19 10:56:07.382113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.039 [2024-11-19 10:56:07.382152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.039 qpair failed and we were unable to recover it. 00:28:20.039 [2024-11-19 10:56:07.382290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.382379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.382508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.382552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.382702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.382745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.382931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.382965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.383096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.383129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.383312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.383374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.383508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.383552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.383687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.383729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.384012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.384056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.384250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.384289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.384443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.384483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.384655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.384700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.384875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.384919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.385119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.385162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.385350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.385390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.385638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.385701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.385934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.385995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.386145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.386184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.386387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.386422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.386590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.386624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.386801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.386845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.387033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.387073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.387222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.387272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.387418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.387459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.387589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.387632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.387826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.387860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.388031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.388082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 
00:28:20.040 [2024-11-19 10:56:07.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.388265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.388414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.388472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.388693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.388737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.388911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.040 [2024-11-19 10:56:07.388954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.040 qpair failed and we were unable to recover it. 00:28:20.040 [2024-11-19 10:56:07.389118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.389177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.389340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.389380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.389560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.389603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.389793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.389827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.389942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.389977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.390099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.390138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.390298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.390380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.390587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.390631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.390763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.390807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.391009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.391053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.391226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.391276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.391399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.391433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.391589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.391655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.391856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.391899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.392072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.392110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.392236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.392275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.392393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.392433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.392618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.392682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.392867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.392911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.393123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.393157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.393294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.393352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.393498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.393563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.393707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.393752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.393960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.394003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.394174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.394212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.394353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.394393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.394547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.394617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.394854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.394913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.395078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.395123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.395315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.395355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.041 [2024-11-19 10:56:07.395595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.395635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 
00:28:20.041 [2024-11-19 10:56:07.395747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.041 [2024-11-19 10:56:07.395782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.041 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.395972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.396161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.396355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.396574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 
00:28:20.042 [2024-11-19 10:56:07.396757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.396921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.396946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.397039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.397066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.397204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 00:28:20.042 [2024-11-19 10:56:07.397294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.042 [2024-11-19 10:56:07.397325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.042 qpair failed and we were unable to recover it. 
00:28:20.042 [2024-11-19 10:56:07.397405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.042 [2024-11-19 10:56:07.397430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.042 qpair failed and we were unable to recover it.
00:28:20.045 [2024-11-19 10:56:07.421687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.045 [2024-11-19 10:56:07.421753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.045 qpair failed and we were unable to recover it. 00:28:20.045 [2024-11-19 10:56:07.422001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.045 [2024-11-19 10:56:07.422063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.045 qpair failed and we were unable to recover it. 00:28:20.045 [2024-11-19 10:56:07.422224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.045 [2024-11-19 10:56:07.422265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.045 qpair failed and we were unable to recover it. 00:28:20.045 [2024-11-19 10:56:07.422450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.045 [2024-11-19 10:56:07.422489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.045 qpair failed and we were unable to recover it. 00:28:20.045 [2024-11-19 10:56:07.422644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.045 [2024-11-19 10:56:07.422703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.045 qpair failed and we were unable to recover it. 
00:28:20.045 [2024-11-19 10:56:07.422926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.422969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.423102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.423142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.423292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.423368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.423547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.423581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.423684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.423718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.423885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.423919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.424144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.424183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.424364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.424418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.424583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.424652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.424859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.424922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.425058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.425104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.425259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.425299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.425501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.425540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.425742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.425787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.425925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.425970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.426177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.426216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.426359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.426409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.426584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.426647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.426841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.426903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.427076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.427119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.427296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.427344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.427508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.427547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.427695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.427759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.427931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.427974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.428130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.428169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.428300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.428348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.428495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.428538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.428688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.428750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.428968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.429204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 
00:28:20.046 [2024-11-19 10:56:07.429380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.429574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.429750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.429944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.046 [2024-11-19 10:56:07.429987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.046 qpair failed and we were unable to recover it. 00:28:20.046 [2024-11-19 10:56:07.430135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.430174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.430331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.430372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.430552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.430596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.430733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.430777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.430912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.430955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.431102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.431142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.431299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.431384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.431591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.431634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.431818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.431864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.432069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.432113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.432300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.432348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.432509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.432549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.432729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.432791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.432944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.432987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.433163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.433201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.433389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.433429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.433600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.433662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.433876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.433919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.434045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.434090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.434247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.434286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.434461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.434505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.434650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.434693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.434897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.434947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.435127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.435170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.435335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.435377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.435592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.435635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.435805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.435849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.436028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.436084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.436296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.436344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.436470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.436511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.436659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.436721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 
00:28:20.047 [2024-11-19 10:56:07.436927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.436994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.437164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.437204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.047 [2024-11-19 10:56:07.437364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.047 [2024-11-19 10:56:07.437403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.047 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.437589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.437655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.437840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.437909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 
00:28:20.048 [2024-11-19 10:56:07.438116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.438160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.438317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.438533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.438595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.438806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.438869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 00:28:20.048 [2024-11-19 10:56:07.439053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.048 [2024-11-19 10:56:07.439097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.048 qpair failed and we were unable to recover it. 
00:28:20.051 [2024-11-19 10:56:07.464411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.464451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.464688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.464750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.464928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.464994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.465149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.465188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.465345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.465385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 
00:28:20.051 [2024-11-19 10:56:07.465573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.465630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.465794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.465837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.465978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.466021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.466175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.466214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.466338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.466380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 
00:28:20.051 [2024-11-19 10:56:07.466538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.466577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.466770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.466813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.467016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.467059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.467236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.467274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.467436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 
00:28:20.051 [2024-11-19 10:56:07.467647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.467703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.467930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.467975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.468130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.468168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.468311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.468372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.468577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.468621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 
00:28:20.051 [2024-11-19 10:56:07.468778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.468821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.469067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.051 [2024-11-19 10:56:07.469110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.051 qpair failed and we were unable to recover it. 00:28:20.051 [2024-11-19 10:56:07.469269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.469315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.469452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.469492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.469666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.469722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.469917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.469956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.470085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.470125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.470320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.470360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.470514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.470581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.470749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.470809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.470995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.471057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.471206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.471284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.471433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.471472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.471645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.471688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.471835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.471877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.472047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.472090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.472309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.472349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.472495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.472535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.472684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.472869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.472912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.473105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.473144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.473275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.473322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.473498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.473543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.473740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.473798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.473973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.474039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.474231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.474271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.474464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.474508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.474645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.474689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.474870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.474938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.475105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.475145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.475265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.475320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.475489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.475561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.475734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.475793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.475967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.476012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.476157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.476196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 
00:28:20.052 [2024-11-19 10:56:07.476357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.476397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.476550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.476612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.476809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.052 [2024-11-19 10:56:07.476853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.052 qpair failed and we were unable to recover it. 00:28:20.052 [2024-11-19 10:56:07.477064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.477106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.477263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.477309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 
00:28:20.053 [2024-11-19 10:56:07.477448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.477491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.477634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.477678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.477821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.477864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.478015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.478059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.478215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.478255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 
00:28:20.053 [2024-11-19 10:56:07.478450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.478495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.478663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.478707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.478910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.478953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.479134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.479173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.479331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.479371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 
00:28:20.053 [2024-11-19 10:56:07.479572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.479617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.479831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.479902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.480072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.480111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.480229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.480268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 00:28:20.053 [2024-11-19 10:56:07.480440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.053 [2024-11-19 10:56:07.480480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.053 qpair failed and we were unable to recover it. 
00:28:20.053 [2024-11-19 10:56:07.480626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.053 [2024-11-19 10:56:07.480694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.053 qpair failed and we were unable to recover it.
00:28:20.053 [... the same connect() failed (errno = 111) / qpair connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f33ac000b90, addr=10.0.0.2, port=4420 through 2024-11-19 10:56:07.505556 ...]
00:28:20.057 [2024-11-19 10:56:07.505688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.505745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.505911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.505958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.506092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.506131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.506336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.506380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.506517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.506562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 
00:28:20.057 [2024-11-19 10:56:07.506710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.506770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.506939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.506982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.507169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.507208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.507356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.507397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.507592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.507635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 
00:28:20.057 [2024-11-19 10:56:07.507770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.507814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.507985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.508028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.508195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.508234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.508452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.508512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.508754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.508810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 
00:28:20.057 [2024-11-19 10:56:07.509067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.509124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.509325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.057 [2024-11-19 10:56:07.509368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.057 qpair failed and we were unable to recover it. 00:28:20.057 [2024-11-19 10:56:07.509509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.509568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.509741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.509794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.509981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.510033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.510260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.510299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.510434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.510474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.510677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.510721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.510877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.510927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.511163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.511202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.511325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.511364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.511563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.511806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.511857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.512133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.512189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.512355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.512413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.512600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.512638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.512762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.512800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.512971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.513033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.513229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.513270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.513448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.513507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.513656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.513697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.513856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.513922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.514120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.514184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.514375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.514418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.514605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.514670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.514882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.514945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.515079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.515132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.515324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.515365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.515561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.515624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.058 [2024-11-19 10:56:07.515851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.515916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 
00:28:20.058 [2024-11-19 10:56:07.516124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.058 [2024-11-19 10:56:07.516167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.058 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.516350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.516389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.516585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.516650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.516845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.516908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.517075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.517118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-11-19 10:56:07.517264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.517312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.517509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.517573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.517777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.517840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.518043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.518087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.518260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.518300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-11-19 10:56:07.518528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.518590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.518762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.518829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.519066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.519109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.519336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.519397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.519558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.519628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-11-19 10:56:07.519901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.519964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.520157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.520217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.520380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.520421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.520628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.520694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.520932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.520995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-11-19 10:56:07.521224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.521263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.521437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.521477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.521676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.521739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.521978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.522070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.522251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.522339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-11-19 10:56:07.522474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.522514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.522667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.522718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.522934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.523000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.523185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.059 [2024-11-19 10:56:07.523236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:20.059 qpair failed and we were unable to recover it. 00:28:20.059 [2024-11-19 10:56:07.523465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.060 [2024-11-19 10:56:07.523508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-11-19 10:56:07.523684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.060 [2024-11-19 10:56:07.523745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.060 qpair failed and we were unable to recover it.
00:28:20.063 [2024-11-19 10:56:07.549783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.549826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.549968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.550013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.550183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.550239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.550387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.550427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.550571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.550628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 
00:28:20.063 [2024-11-19 10:56:07.550774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.550816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.550958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.551001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.551142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.551184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.063 qpair failed and we were unable to recover it. 00:28:20.063 [2024-11-19 10:56:07.551362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.063 [2024-11-19 10:56:07.551401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.551611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.551653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.551810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.551852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.552037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.552098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.552289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.552337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.552493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.552552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.552725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.552767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.552939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.552981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.553181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.553223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.553408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.553446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.553668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.553711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.553910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.553973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.554172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.554215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.554404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.554444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.554634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.554701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.554870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.554913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.555086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.555129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.555313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.555374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.555538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.555576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.555744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.555885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.555930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.556138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.556194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.556376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.556417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.556567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.556604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.556748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.556791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.556964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.557007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.557155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.557216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 
00:28:20.064 [2024-11-19 10:56:07.557342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.064 [2024-11-19 10:56:07.557381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.064 qpair failed and we were unable to recover it. 00:28:20.064 [2024-11-19 10:56:07.557542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.557580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.557726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.557770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.557967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.558029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.558248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.558287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.558459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.558497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.558681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.558741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.558934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.558997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.559183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.559226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.559404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.559629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.559685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.559831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.559894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.560061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.560103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.560260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.560298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.560489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.560527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.560671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.560728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.560878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.560920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.561129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.561172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.561358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.561397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.561552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.561590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.561754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.561798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.561945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.561989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.562149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.562191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.562346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.562384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.562503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.562541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.562696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.562735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.562919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.562963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.563148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.563190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.563349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.563388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.563572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.563611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 
00:28:20.065 [2024-11-19 10:56:07.563742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.563780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.563929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.065 [2024-11-19 10:56:07.563972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.065 qpair failed and we were unable to recover it. 00:28:20.065 [2024-11-19 10:56:07.564136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.564177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.564386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.564551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.564590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 
00:28:20.066 [2024-11-19 10:56:07.564713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.564752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.564923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.564964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.565134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.565178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.565345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.565384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 00:28:20.066 [2024-11-19 10:56:07.565535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.066 [2024-11-19 10:56:07.565572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.066 qpair failed and we were unable to recover it. 
00:28:20.066 [2024-11-19 10:56:07.565756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.565796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.565990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.566032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.566202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.566239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.566406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.566455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.566667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.566735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.566896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.566952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.567101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.567144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.567319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.567359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.567491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.567529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.567659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.567716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.567886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.567944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.568127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.568170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.568378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.568418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.568643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.568785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.568828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.568997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.569040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.569227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.569266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.569506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.569550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.569744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.569799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.569944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.569990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.570213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.570252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.570391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.570429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.570570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.570607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.066 qpair failed and we were unable to recover it.
00:28:20.066 [2024-11-19 10:56:07.570777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.066 [2024-11-19 10:56:07.570820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.571058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.571103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.571276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.571340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.571554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.571619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.571798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.571856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.572016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.572075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.572244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.572281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.572452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.572510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.572664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.572725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.572888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.572946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.573139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.573183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.573325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.573384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.573502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.573541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.573827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.573894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.574125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.574182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.574405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.574446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.574625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.574668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.574907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.574973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.575251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.575326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.575511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.575852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.575927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.576213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.576269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.576478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.576517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.576735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.576801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.577076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.577141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.577378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.577419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.577581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.577620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.577804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.577844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.578007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.578041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.578183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.578217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.578342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.578381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.067 qpair failed and we were unable to recover it.
00:28:20.067 [2024-11-19 10:56:07.578533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.067 [2024-11-19 10:56:07.578568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.578694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.578729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.578875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.578945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.579169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.579226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.579438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.579474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.579589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.579624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.579774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.579810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.579976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.580030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.580237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.580293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.580456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.580494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.580646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.580684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.580923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.580982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.581246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.581300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.581483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.581523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.581659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.581698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.581886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.581956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.582135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.582200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.582345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.582384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.582523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.582562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.582720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.582759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.583047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.583085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.583332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.583371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.583531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.583571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.583730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.583776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.584069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.584138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.584375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.584418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.584556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.584595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.584726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.584764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.584940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.585008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.585210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.585276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.585481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.585519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.068 qpair failed and we were unable to recover it.
00:28:20.068 [2024-11-19 10:56:07.585669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.068 [2024-11-19 10:56:07.585757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.585956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.586010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.586201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.586256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.586473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.586530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.586721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.586765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.586972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.587028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.587204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.587260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.587450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.587488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.587611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.587652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.587892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.587949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.588155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.588210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.588406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.588446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.588681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.588722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.589011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.589057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.589230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.589273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.069 [2024-11-19 10:56:07.589491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.069 [2024-11-19 10:56:07.589530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.069 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.589654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.589723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.589965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.590020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.590236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.590315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.590442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.590482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.590610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.590648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.590766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.590805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.591015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.591073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.591320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.591380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.591503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.591540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.591768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.591825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.592067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.592123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.592348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.592389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.592519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.592556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.592681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.355 [2024-11-19 10:56:07.592720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.355 qpair failed and we were unable to recover it.
00:28:20.355 [2024-11-19 10:56:07.592927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.592998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.593174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.593244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.593489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.593528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.593731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.593787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.594014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.594086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 
00:28:20.355 [2024-11-19 10:56:07.594255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.594317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.594506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.594544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.594675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.594714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.594897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.594988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 00:28:20.355 [2024-11-19 10:56:07.595289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.595366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.355 qpair failed and we were unable to recover it. 
00:28:20.355 [2024-11-19 10:56:07.595530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.355 [2024-11-19 10:56:07.595568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.595796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.595853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.596028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.596082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.596375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.596415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.596544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.596588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.596752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.596792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.597039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.597111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.597393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.597554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.597615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.597815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.597882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.598097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.598148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.598359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.598400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.598542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.598582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.598720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.598759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.598991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.599059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.599272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.599352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.599510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.599549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.599727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.599768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.600000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.600053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.600285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.600368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.600529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.600569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.600826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.600865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.601046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.601098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.601283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.601378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.601570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.601633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.601826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.601867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.602105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.602157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.602393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.602434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.602591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.602659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.602867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.602925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 
00:28:20.356 [2024-11-19 10:56:07.603123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.603176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.356 qpair failed and we were unable to recover it. 00:28:20.356 [2024-11-19 10:56:07.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.356 [2024-11-19 10:56:07.603431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.603646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.603698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.603878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.603930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.604100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.604154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.604317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.604376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.604506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.604544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.604722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.604763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.604922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.604972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.605182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.605235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.605413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.605453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.605646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.605683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.605836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.605874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.606052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.606103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.606267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.606360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.606491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.606529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.606717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.606769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.606979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.607031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.607230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.607297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.607497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.607536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.607737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.607789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.607991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.608045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.608235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.608277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.608421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.608460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.608648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.608698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.608889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.608940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.609117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.609155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.609371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.609410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.609540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.609598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.609824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.609881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 
00:28:20.357 [2024-11-19 10:56:07.610108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.610168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.610387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.610426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.357 [2024-11-19 10:56:07.610584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.357 [2024-11-19 10:56:07.610651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.357 qpair failed and we were unable to recover it. 00:28:20.358 [2024-11-19 10:56:07.610794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.358 [2024-11-19 10:56:07.610845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.358 qpair failed and we were unable to recover it. 00:28:20.358 [2024-11-19 10:56:07.610997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.358 [2024-11-19 10:56:07.611049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.358 qpair failed and we were unable to recover it. 
00:28:20.358 [2024-11-19 10:56:07.611279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.358 [2024-11-19 10:56:07.611326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.358 qpair failed and we were unable to recover it. 
[log truncated: the same connect() / nvme_tcp_qpair_connect_sock error pair (errno = 111, tqpair=0x7f33a8000b90, addr=10.0.0.2, port=4420) repeats continuously from 10:56:07.611490 through 10:56:07.640652; every attempt ends with "qpair failed and we were unable to recover it."]
00:28:20.361 [2024-11-19 10:56:07.640871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.361 [2024-11-19 10:56:07.640927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.361 qpair failed and we were unable to recover it. 00:28:20.361 [2024-11-19 10:56:07.641134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.361 [2024-11-19 10:56:07.641188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.361 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.641375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.641432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.641690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.641748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.642015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.642074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.642298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.642381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.642588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.642649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.642839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.642901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.643095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.643154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.643386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.643447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.643677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.643716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.643861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.643899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.644127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.644188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.644427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.644489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.644678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.644741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.644965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.645025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.645255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.645326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.645528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.645586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.645857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.645916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.646123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.646183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.646370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.646432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.646664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.646726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.646959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.647019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.647283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.647359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.647605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.647665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.647864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.647924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.648186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.648245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.648496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.648557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.648815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.648875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.649152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.649211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 
00:28:20.362 [2024-11-19 10:56:07.649430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.649494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.649745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.649805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.362 [2024-11-19 10:56:07.650048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.362 [2024-11-19 10:56:07.650109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.362 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.650355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.650418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.650660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.650721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.650986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.651046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.651283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.651356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.651595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.651655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.651903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.651964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.652188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.652248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.652533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.652593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.652865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.652925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.653194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.653252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.653534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.653594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.653812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.653872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.654129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.654203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.654503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.654565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.654832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.654892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.655156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.655220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.655459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.655520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.655752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.655812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.656083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.656121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.656260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.656298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.656510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.656573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.656830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.656889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.657118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.657178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.657444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.657506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.657765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.657824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.658097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.658135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.658334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.658374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 
00:28:20.363 [2024-11-19 10:56:07.658504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.658543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.658801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.658867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.659112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.659178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.659475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.363 [2024-11-19 10:56:07.659542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.363 qpair failed and we were unable to recover it. 00:28:20.363 [2024-11-19 10:56:07.659792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.659860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 
00:28:20.364 [2024-11-19 10:56:07.660153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.660192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.660350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.660404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.660625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.660690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.660916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.660979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.661266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.661350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 
00:28:20.364 [2024-11-19 10:56:07.661650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.661715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.662011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.662075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.662378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.662446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.662744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.662810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 00:28:20.364 [2024-11-19 10:56:07.663061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.364 [2024-11-19 10:56:07.663121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.364 qpair failed and we were unable to recover it. 
00:28:20.364 [2024-11-19 10:56:07.663412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.364 [2024-11-19 10:56:07.663478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.364 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / qpair recovery errors for tqpair=0x7f33a8000b90 (addr=10.0.0.2, port=4420) repeated for every retry from 10:56:07.663 through 10:56:07.700 ...]
00:28:20.368 [2024-11-19 10:56:07.700683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.368 [2024-11-19 10:56:07.700749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.368 qpair failed and we were unable to recover it.
00:28:20.368 [2024-11-19 10:56:07.701025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.701091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.701375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.701443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.701782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.701849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.702131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.702190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.702416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.702478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 
00:28:20.368 [2024-11-19 10:56:07.702720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.702784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.703076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.703138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.703405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.703468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.703724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.703790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.704035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.704101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 
00:28:20.368 [2024-11-19 10:56:07.704389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.704457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.704687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.704751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.705040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.705105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.705393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.705460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.705715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.705780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 
00:28:20.368 [2024-11-19 10:56:07.706029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.706106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.706357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.706424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.706662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.706728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.706980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.707049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.707292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.707374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 
00:28:20.368 [2024-11-19 10:56:07.707688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.707752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.708044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.708108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.708387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.708456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.708696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.708762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.708971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.709036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 
00:28:20.368 [2024-11-19 10:56:07.709255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.709338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.368 [2024-11-19 10:56:07.709602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.368 [2024-11-19 10:56:07.709670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.368 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.709885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.709951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.710241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.710341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.710645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.710711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.710993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.711058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.711347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.711414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.711657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.711698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.711859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.711899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.712188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.712253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.712533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.712601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.712820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.712887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.713173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.713238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.713541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.713607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.713852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.713917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.714217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.714256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.714412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.714452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.714612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.714680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.714946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.715011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.715251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.715337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.715567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.715642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.715911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.715977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.716192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.716498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.716566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.716861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.716900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.717015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.717057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.717212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.717252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.717571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.717636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.717862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.717927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.718181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.718246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 
00:28:20.369 [2024-11-19 10:56:07.718511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.718590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.718859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.369 [2024-11-19 10:56:07.718924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.369 qpair failed and we were unable to recover it. 00:28:20.369 [2024-11-19 10:56:07.719163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.719229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.719512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.719582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.719880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.719946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 
00:28:20.370 [2024-11-19 10:56:07.720218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.720257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.720455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.720528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.720780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.720848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.721136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.721201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.721506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.721572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 
00:28:20.370 [2024-11-19 10:56:07.721818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.721883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.722184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.722249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.722530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.722595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.722880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.722945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.723213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.723279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 
00:28:20.370 [2024-11-19 10:56:07.723551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.723616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.723829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.723870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.724059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.724134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.724381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.724449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.724707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.724771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 
00:28:20.370 [2024-11-19 10:56:07.725030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.725094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.725349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.725416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.725697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.725762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.726015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.726082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 00:28:20.370 [2024-11-19 10:56:07.726340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.370 [2024-11-19 10:56:07.726406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.370 qpair failed and we were unable to recover it. 
00:28:20.374 [2024-11-19 10:56:07.753037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.753071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.753381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.753408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.753501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.753528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.753620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.753664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.753815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.753850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 
00:28:20.374 [2024-11-19 10:56:07.753966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.754230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.754484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.754629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.754741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 
00:28:20.374 [2024-11-19 10:56:07.754850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.754960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.754985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.374 [2024-11-19 10:56:07.755128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.374 [2024-11-19 10:56:07.755219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.374 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.755453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.755481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.755571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.755597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.755706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.755732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.755956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.756261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.756384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.756515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.756681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.756818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.756888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.757142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.757192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.757383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.757409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.757510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.757543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.757703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.757769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.757905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.757938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.758119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.758145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.758278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.758310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.758393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.758420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.758551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.758584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.758836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.758879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.759562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.759955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.759981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.760069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.760095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 
00:28:20.375 [2024-11-19 10:56:07.760171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.760196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.375 [2024-11-19 10:56:07.760299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.375 [2024-11-19 10:56:07.760333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.375 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.760419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.760444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.760552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.760585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.760758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.760825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.761075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.761109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.761250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.761283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.761504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.761547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.761819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.761852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.761990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.762023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.762165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.762199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.762421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.762456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.762625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.762659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.762779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.762814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.763064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.763097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.763214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.763249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.763396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.763440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.763646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.763680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.763868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.763926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.764200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.764233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.764384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.764418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.764579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.764621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.764767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.764812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.765016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.765050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.765192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.765226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.765372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.765429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.765586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.765628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.765753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.765808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.765976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.766009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.766248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.766280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 
00:28:20.376 [2024-11-19 10:56:07.766436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.766469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.766589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.766623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.768297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.376 [2024-11-19 10:56:07.768384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.376 qpair failed and we were unable to recover it. 00:28:20.376 [2024-11-19 10:56:07.768530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.768561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.768727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 
00:28:20.377 [2024-11-19 10:56:07.768826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.768857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.768990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.769183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.769327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.769464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 
00:28:20.377 [2024-11-19 10:56:07.769661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.769850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.769881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 
00:28:20.377 [2024-11-19 10:56:07.770434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.770859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.770891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.771027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.771057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 
00:28:20.377 [2024-11-19 10:56:07.771192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.771223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.771356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.771388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.771516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.771552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.377 [2024-11-19 10:56:07.771644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.377 [2024-11-19 10:56:07.771674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.377 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.771785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.771816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.771955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.771985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.772116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.772277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.772428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.772588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.772716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.772885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.772916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.773459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.773935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.773966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.774078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.774223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.774368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.774508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.774638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.774827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.774857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.774982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.775116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.775278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.775415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.775590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 
00:28:20.378 [2024-11-19 10:56:07.775755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.775891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.775922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.776044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.776075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.776206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.776236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.378 qpair failed and we were unable to recover it. 00:28:20.378 [2024-11-19 10:56:07.776389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.378 [2024-11-19 10:56:07.776438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.776590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.776642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.776798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.776828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.776955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.776986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.777113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.777245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.777396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.777528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.777684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.777859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.777889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.778021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.778178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.778309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.778434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.778600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.778763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.778953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.778984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.779692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.779854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.779978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.780009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.780112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.780142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.379 [2024-11-19 10:56:07.780244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.780274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 
00:28:20.379 [2024-11-19 10:56:07.780398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.379 [2024-11-19 10:56:07.780428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.379 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.780556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.780586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.780721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.780750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.780850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.780880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.780975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 
00:28:20.380 [2024-11-19 10:56:07.781109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.781240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.781410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.781539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.781670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 
00:28:20.380 [2024-11-19 10:56:07.781830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.781860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.781995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 
00:28:20.380 [2024-11-19 10:56:07.782567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.782955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.782984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.783114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 
00:28:20.380 [2024-11-19 10:56:07.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.783414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.783547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.783680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 00:28:20.380 [2024-11-19 10:56:07.783831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.380 [2024-11-19 10:56:07.783860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.380 qpair failed and we were unable to recover it. 
00:28:20.383 [2024-11-19 10:56:07.794882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.383 [2024-11-19 10:56:07.794907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.383 qpair failed and we were unable to recover it.
00:28:20.383 [2024-11-19 10:56:07.795000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.383 [2024-11-19 10:56:07.795026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.383 qpair failed and we were unable to recover it.
00:28:20.383 [2024-11-19 10:56:07.795163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.383 [2024-11-19 10:56:07.795189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.383 qpair failed and we were unable to recover it.
00:28:20.383 [2024-11-19 10:56:07.795348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.383 [2024-11-19 10:56:07.795388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.383 qpair failed and we were unable to recover it.
00:28:20.383 [2024-11-19 10:56:07.795504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.383 [2024-11-19 10:56:07.795532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.383 qpair failed and we were unable to recover it.
00:28:20.384 [2024-11-19 10:56:07.798731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.798757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.384 [2024-11-19 10:56:07.798843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.798870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.384 [2024-11-19 10:56:07.798969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.798996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.384 [2024-11-19 10:56:07.799116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.799141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.384 [2024-11-19 10:56:07.799231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.799258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 
00:28:20.384 [2024-11-19 10:56:07.799356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.799384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.384 [2024-11-19 10:56:07.799480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.384 [2024-11-19 10:56:07.799506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.384 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.799618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.799648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.799767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.799794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.799915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.799941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 
00:28:20.385 [2024-11-19 10:56:07.800028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 
00:28:20.385 [2024-11-19 10:56:07.800713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.800964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.800991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 
00:28:20.385 [2024-11-19 10:56:07.801327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 00:28:20.385 [2024-11-19 10:56:07.801802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.385 [2024-11-19 10:56:07.801828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.385 qpair failed and we were unable to recover it. 
00:28:20.385 [2024-11-19 10:56:07.802405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.385 [2024-11-19 10:56:07.802443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.385 qpair failed and we were unable to recover it.
00:28:20.387 [2024-11-19 10:56:07.807921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9f30 (9): Bad file descriptor
00:28:20.387 [2024-11-19 10:56:07.808083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.387 [2024-11-19 10:56:07.808125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.387 qpair failed and we were unable to recover it.
00:28:20.388 [2024-11-19 10:56:07.811039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 
00:28:20.388 [2024-11-19 10:56:07.811704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.811864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.811990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 
00:28:20.388 [2024-11-19 10:56:07.812407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.812851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.812878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 
00:28:20.388 [2024-11-19 10:56:07.812995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 
00:28:20.388 [2024-11-19 10:56:07.813632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.388 qpair failed and we were unable to recover it. 00:28:20.388 [2024-11-19 10:56:07.813899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.388 [2024-11-19 10:56:07.813926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 
00:28:20.389 [2024-11-19 10:56:07.814315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.814831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 
00:28:20.389 [2024-11-19 10:56:07.814963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.814989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.815103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.815276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.815466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.815584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 
00:28:20.389 [2024-11-19 10:56:07.815741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.815861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.815888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 
00:28:20.389 [2024-11-19 10:56:07.816441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.816859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.816986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.817013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 
00:28:20.389 [2024-11-19 10:56:07.817169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.817231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.817376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.817405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.817497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.817523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.389 [2024-11-19 10:56:07.817611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.389 [2024-11-19 10:56:07.817655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.389 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.817829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.817872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 
00:28:20.390 [2024-11-19 10:56:07.818078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.818291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.818413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.818548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.818695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 
00:28:20.390 [2024-11-19 10:56:07.818930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.818973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.819182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.819334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.819448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.819579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 
00:28:20.390 [2024-11-19 10:56:07.819755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.819906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.819937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 
00:28:20.390 [2024-11-19 10:56:07.820494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.820967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.820994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.821088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.821115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 
00:28:20.390 [2024-11-19 10:56:07.821197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.821223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.821332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.821376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.821502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.821530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.390 [2024-11-19 10:56:07.821641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.390 [2024-11-19 10:56:07.821669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.390 qpair failed and we were unable to recover it. 00:28:20.391 [2024-11-19 10:56:07.821836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.821880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 
00:28:20.391 [2024-11-19 10:56:07.822013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.822040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 00:28:20.391 [2024-11-19 10:56:07.822156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.822182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 00:28:20.391 [2024-11-19 10:56:07.822271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.822297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 00:28:20.391 [2024-11-19 10:56:07.822404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.822431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 00:28:20.391 [2024-11-19 10:56:07.822563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.391 [2024-11-19 10:56:07.822589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.391 qpair failed and we were unable to recover it. 
[... identical connect()/qpair failure pairs (errno = 111, addr=10.0.0.2, port=4420) repeated for tqpair=0x7f33ac000b90 and tqpair=0x7f33a8000b90 from 10:56:07.822 through 10:56:07.838; repeats truncated ...]
00:28:20.395 [2024-11-19 10:56:07.838601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.838658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.838774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.838805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.838916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.838946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 
00:28:20.395 [2024-11-19 10:56:07.839429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.839905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.839932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 
00:28:20.395 [2024-11-19 10:56:07.840027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 
00:28:20.395 [2024-11-19 10:56:07.840639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.840936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.840978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 
00:28:20.395 [2024-11-19 10:56:07.841376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.841863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.841889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 
00:28:20.395 [2024-11-19 10:56:07.841978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.395 [2024-11-19 10:56:07.842004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.395 qpair failed and we were unable to recover it. 00:28:20.395 [2024-11-19 10:56:07.842095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.842268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.842391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.842528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.842717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.842850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.842878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.843437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.843893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.843988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.844099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.844671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.844897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.844985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.845381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 00:28:20.396 [2024-11-19 10:56:07.845944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.396 [2024-11-19 10:56:07.845971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.396 qpair failed and we were unable to recover it. 
00:28:20.396 [2024-11-19 10:56:07.846084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 
00:28:20.397 [2024-11-19 10:56:07.846665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.846845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.846972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 
00:28:20.397 [2024-11-19 10:56:07.847363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.847860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 
00:28:20.397 [2024-11-19 10:56:07.847971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.847999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.848107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.848136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.848250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.848276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.848378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 00:28:20.397 [2024-11-19 10:56:07.848490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.848518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it. 
00:28:20.397 [2024-11-19 10:56:07.848638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.397 [2024-11-19 10:56:07.848664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.397 qpair failed and we were unable to recover it.
00:28:20.397 [... the preceding three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously through 2024-11-19 10:56:07.864753, for tqpairs 0x7f33ac000b90, 0x7f33a8000b90, and 0x7f33b4000b90, all targeting addr=10.0.0.2, port=4420; errno 111 is ECONNREFUSED on Linux ...]
00:28:20.401 [2024-11-19 10:56:07.864843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.864871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.864963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.864989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 
00:28:20.401 [2024-11-19 10:56:07.865466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.865959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.865988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.866105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.866135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 
00:28:20.401 [2024-11-19 10:56:07.866225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.866252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.866436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.866486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.866601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.866648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.866823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.866861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.867048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 
00:28:20.401 [2024-11-19 10:56:07.867236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.867380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.867501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.867652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.867784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 
00:28:20.401 [2024-11-19 10:56:07.867957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.867986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.868108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.868146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.868332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.401 [2024-11-19 10:56:07.868383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.401 qpair failed and we were unable to recover it. 00:28:20.401 [2024-11-19 10:56:07.868506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.868535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.868644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.868675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.868774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.868837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.869027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.869227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.869405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.869527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.869795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.869958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.869998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.870125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.870163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.870339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.870380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.870488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.870518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.870644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.870683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.870829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.870868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.870995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.871406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.871896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.871921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.872016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.872210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.872357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.872512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.872720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.872933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.872964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.873091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.873255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.873432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.873614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 
00:28:20.402 [2024-11-19 10:56:07.873805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.873947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.873974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.874063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.874090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.874181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.402 [2024-11-19 10:56:07.874208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.402 qpair failed and we were unable to recover it. 00:28:20.402 [2024-11-19 10:56:07.874314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.874341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.874463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.874492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.874594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.874625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.874753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.874780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.874925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.874965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.875112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.875244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.875392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.875511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.875683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.875868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.875897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.876031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.876149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.876297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.876448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.876605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.876768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.876938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.876975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.877515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.877890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.877979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.878097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.878216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.878342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.878458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.878569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.878740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.878856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.878882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.879000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.879031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.879147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.879174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 00:28:20.403 [2024-11-19 10:56:07.879258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.879284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.403 qpair failed and we were unable to recover it. 
00:28:20.403 [2024-11-19 10:56:07.879378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.403 [2024-11-19 10:56:07.879405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.879494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.879521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.879607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.879633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.879750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.879776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.879894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.879921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.880033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.880655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.880911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.880938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.881292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.881796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.881920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.881947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.882606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.882812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.882974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.883135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.883293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.883498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.883650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.883815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.883854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.883996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.884035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.884166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.884196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 
00:28:20.404 [2024-11-19 10:56:07.884327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.884354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.404 [2024-11-19 10:56:07.884470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.404 [2024-11-19 10:56:07.884523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.404 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.884662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.884693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.884777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.884802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.884879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.884904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.884982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.885161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.885310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.885475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.885587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.885744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.885784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.885981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.886158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.886312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.886429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.886569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.886741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.886941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.886981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.887101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.887130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.887234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.887261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.887408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.887460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.887584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.887634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.887775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.887827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.887973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.888143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.888291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.888405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.888525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.888697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.888845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.888969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.888996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 
00:28:20.405 [2024-11-19 10:56:07.889660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.889954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.405 [2024-11-19 10:56:07.889981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.405 qpair failed and we were unable to recover it. 00:28:20.405 [2024-11-19 10:56:07.890069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.890195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 
00:28:20.406 [2024-11-19 10:56:07.890345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.890528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.890752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.890921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.890963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.891117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 
00:28:20.406 [2024-11-19 10:56:07.891294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.891485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.891602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.891721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 00:28:20.406 [2024-11-19 10:56:07.891864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.406 [2024-11-19 10:56:07.891903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.406 qpair failed and we were unable to recover it. 
00:28:20.406 [2024-11-19 10:56:07.892058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.892247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.892419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.892543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.892698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.892859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.892898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.893836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.893876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.894781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.894832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.895019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.895059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.406 [2024-11-19 10:56:07.895246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.406 [2024-11-19 10:56:07.895292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.406 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.895472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.895499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.895591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.895646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.895826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.895872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.896947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.896985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.897202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.897240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.897398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.897426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.897537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.897568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.897692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.897731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.897874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.897913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.898929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.898956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.899875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.899925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.900943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.900971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.901088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.901120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.901238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.901265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.407 [2024-11-19 10:56:07.901393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.407 [2024-11-19 10:56:07.901434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.407 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.901559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.901587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.901708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.901735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.901853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.901879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.901978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.902852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.902892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.903863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.903892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.904830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.904879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.905862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.905996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.906170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.906361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.906565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.906724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.906873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.906912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.907030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.408 [2024-11-19 10:56:07.907070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.408 qpair failed and we were unable to recover it.
00:28:20.408 [2024-11-19 10:56:07.907187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.408 [2024-11-19 10:56:07.907226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.408 qpair failed and we were unable to recover it. 00:28:20.408 [2024-11-19 10:56:07.907364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.408 [2024-11-19 10:56:07.907394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.408 qpair failed and we were unable to recover it. 00:28:20.408 [2024-11-19 10:56:07.907505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.408 [2024-11-19 10:56:07.907556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.907703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.907758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.907946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.907998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.908090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.908232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.908350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.908474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.908624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.908868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.908914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.909054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.909114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.909259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.909298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.909439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.909496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.909659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.909705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.909901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.909944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.910117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.910294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.910469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.910602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.910764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.910922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.910961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.911629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.911882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.911997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.912192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.912372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.912520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.912714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.912866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.912905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.913229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 
00:28:20.409 [2024-11-19 10:56:07.913854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.409 [2024-11-19 10:56:07.913966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.409 [2024-11-19 10:56:07.913993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.409 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.914442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.914967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.914993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.915081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.915186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.915333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.915477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.915626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.915741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.915882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.915909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.916425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.916889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.916916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.917029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.917179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.917316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.917531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.917700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.917858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.917896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.918024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.918223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.918390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.918556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 
00:28:20.410 [2024-11-19 10:56:07.918683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.918879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.918920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.919058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.919097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.410 [2024-11-19 10:56:07.919228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.410 [2024-11-19 10:56:07.919267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.410 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.919423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.919451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 
00:28:20.411 [2024-11-19 10:56:07.919536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.919562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.919650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.919676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.919768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.919814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.919994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.920173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 
00:28:20.411 [2024-11-19 10:56:07.920392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.920515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.920664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.920850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.920889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 00:28:20.411 [2024-11-19 10:56:07.921049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.411 [2024-11-19 10:56:07.921087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:20.411 qpair failed and we were unable to recover it. 
00:28:20.415 [2024-11-19 10:56:07.939012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.939175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.939354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.939705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 
00:28:20.415 [2024-11-19 10:56:07.939916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.939968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.940051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.415 [2024-11-19 10:56:07.940077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.415 qpair failed and we were unable to recover it. 00:28:20.415 [2024-11-19 10:56:07.940199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.940225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.940376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.940417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.940542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.940581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 
00:28:20.416 [2024-11-19 10:56:07.940750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.940789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.940915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.940953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.941082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.941253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.941418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 
00:28:20.416 [2024-11-19 10:56:07.941555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.941710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.941881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.941936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.942073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.942112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.942266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.942312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 
00:28:20.416 [2024-11-19 10:56:07.942449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.942476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.942570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.942596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.942717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.942757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.942964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.943130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 
00:28:20.416 [2024-11-19 10:56:07.943344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.943487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.943610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.943807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.943847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.943981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.944033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 
00:28:20.416 [2024-11-19 10:56:07.944183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.944222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.944393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.944421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.944507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.944535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.416 [2024-11-19 10:56:07.944645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.416 [2024-11-19 10:56:07.944672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.416 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.944782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.944821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 
00:28:20.417 [2024-11-19 10:56:07.945001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.945253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.945455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.945593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.945714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 
00:28:20.417 [2024-11-19 10:56:07.945854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.945893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 
00:28:20.417 [2024-11-19 10:56:07.946709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.946938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.946964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 
00:28:20.417 [2024-11-19 10:56:07.947349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.417 [2024-11-19 10:56:07.947843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 
00:28:20.417 [2024-11-19 10:56:07.947953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.417 [2024-11-19 10:56:07.947980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.417 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.948078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.948198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.948363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.948527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 
00:28:20.418 [2024-11-19 10:56:07.948710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.948884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.948923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.949038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.949214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.949378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 
00:28:20.418 [2024-11-19 10:56:07.949581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.949745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.949910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.949949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.950108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.950135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 00:28:20.418 [2024-11-19 10:56:07.950224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.418 [2024-11-19 10:56:07.950251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.418 qpair failed and we were unable to recover it. 
00:28:20.418 [2024-11-19 10:56:07.950362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.950389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 00:28:20.706 [2024-11-19 10:56:07.950517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.950556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 00:28:20.706 [2024-11-19 10:56:07.950713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.950752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 00:28:20.706 [2024-11-19 10:56:07.950901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.950940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 00:28:20.706 [2024-11-19 10:56:07.951083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.951123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 
00:28:20.706 [2024-11-19 10:56:07.951246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.706 [2024-11-19 10:56:07.951273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.706 qpair failed and we were unable to recover it. 
[the three messages above repeat back-to-back with advancing timestamps from 10:56:07.951404 through 10:56:07.971024, always with the same tqpair=0x7f33ac000b90, addr=10.0.0.2, port=4420 — duplicate log lines elided]
00:28:20.708 [2024-11-19 10:56:07.971147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.708 [2024-11-19 10:56:07.971186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.708 qpair failed and we were unable to recover it. 00:28:20.708 [2024-11-19 10:56:07.971351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.708 [2024-11-19 10:56:07.971393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.708 qpair failed and we were unable to recover it. 00:28:20.708 [2024-11-19 10:56:07.971531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.708 [2024-11-19 10:56:07.971570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.708 qpair failed and we were unable to recover it. 00:28:20.708 [2024-11-19 10:56:07.971726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.971765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.971922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.971961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.972107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.972146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.972275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.972322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.972480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.972520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.972674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.972713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.972876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.972915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.973040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.973079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.973198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.973236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.973430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.973469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.973602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.973641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.973791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.973836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.973961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.974156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.974321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.974519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.974708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.974897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.974936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.975125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.975164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.975323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.975362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.975488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.975527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.975713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.975752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.975886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.975926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.976091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.976130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.976265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.976312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.976481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.976520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.976640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.976679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.976836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.976875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.976988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.977150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.977351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.977525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.977720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.977894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.977933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.978096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.978135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.978256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.978295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.978471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.978511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.978677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.978716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.978915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.979148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.979188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.979339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.979397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.979595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.979637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 
00:28:20.709 [2024-11-19 10:56:07.979820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.979859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.979986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.980025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.980213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.980254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.980412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.709 [2024-11-19 10:56:07.980456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.709 qpair failed and we were unable to recover it. 00:28:20.709 [2024-11-19 10:56:07.980624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.980665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.980837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.980879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.981049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.981090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.981227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.981269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.981406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.981450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.981613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.981667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.981828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.981869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.982017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.982058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.982225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.982266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.982405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.982447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.982599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.982640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.982806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.982967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.983146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.983348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.983531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.983734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.983899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.983941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.984135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.984177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.984359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.984400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.984564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.984605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.984736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.984778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.984908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.984949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.985102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.985142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.985326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.985368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 00:28:20.710 [2024-11-19 10:56:07.985520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.710 [2024-11-19 10:56:07.985562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.710 qpair failed and we were unable to recover it. 
00:28:20.710 [2024-11-19 10:56:07.985724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.985766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.985938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.985979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.986109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.986150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.986321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.986363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.986489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.986531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.986667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.986709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.986867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.986908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.987075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.987116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.987244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.987286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.987456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.987498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.987672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.987713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.987824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.987866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.988071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.988219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.988262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.988477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.988520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.988730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.710 [2024-11-19 10:56:07.988773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.710 qpair failed and we were unable to recover it.
00:28:20.710 [2024-11-19 10:56:07.988931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.988972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.989128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.989168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.989334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.989376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.989538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.989587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.989760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.989801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.989923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.989964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.990089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.990130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.990324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.990366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.990503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.990544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.990669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.990709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.990836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.990876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.991064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.991105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.991276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.991470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.991511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.991651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.991691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.991862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.991903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.992073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.992253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.992426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.992606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.992792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.992983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.993191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.993384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.993589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.993766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.993932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.993975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.994121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.994162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.994341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.994383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.994551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.994594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.994754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.994799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.994993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.995171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.995214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.995409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.995453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.995614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.995660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.995865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.995906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.996103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.996316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.996482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.996658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.996867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.996996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.997037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.997194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.997236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.997385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.997442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.997598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.997639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.711 [2024-11-19 10:56:07.997827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.711 [2024-11-19 10:56:07.997869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.711 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.998067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.998265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.998506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.998679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.998840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.998996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.999039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.999214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.999254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.999445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.999488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.999678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.999726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:07.999903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:07.999949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.000112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.000175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.000376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.000420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.000573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.000634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.000868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.000914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.001077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.001119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.001288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.001338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.001478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.001519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.001703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.001746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.001915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.001956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.002123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.002164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.002282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.002334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.002501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.002717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.002758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.002888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.002929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.003159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.003200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.003380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.003422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.003622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.003679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.003834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.003882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.004094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.004140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.004339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.004389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.004557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.004599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.004717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.004758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.004919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.004960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.005133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.005175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.005369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.005411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.005571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.005613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.005753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.005794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.005959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.006008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.006182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.006223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.006398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.006446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.006624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.006670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.006821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.006867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.007054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.007096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.007266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.007315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.007493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.007536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.007687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.712 [2024-11-19 10:56:08.007729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.712 qpair failed and we were unable to recover it.
00:28:20.712 [2024-11-19 10:56:08.007855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.007896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.008067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.008107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.008225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.008266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.008434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.008476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.008602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.008643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.008844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.008885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.009044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.713 [2024-11-19 10:56:08.009085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.713 qpair failed and we were unable to recover it.
00:28:20.713 [2024-11-19 10:56:08.009239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.009279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.009471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.009513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.009643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.009685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.009848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.009890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.010047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.010093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.010314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.010361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.010529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.010570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.010781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.010843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.010977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.011157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.011339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.011528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.011739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.011913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.011955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.012080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.012121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.012282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.012331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.012475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.012519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.012653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.012696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.012875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.012920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.013062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.013109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.013264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.013320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.013508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.013554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.013736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.013782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.013960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.014006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.014142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.014214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.014408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.014455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.014595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.014640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.014828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.014875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.015026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.015072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.015280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.015352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 
00:28:20.713 [2024-11-19 10:56:08.015527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.015574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.015754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.015799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.015944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.015990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.016118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.713 [2024-11-19 10:56:08.016164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.713 qpair failed and we were unable to recover it. 00:28:20.713 [2024-11-19 10:56:08.016331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.016379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.016590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.016636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.016781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.016827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.016987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.017033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.017232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.017278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.017462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.017508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.017654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.017700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.017884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.017930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.018054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.018101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.018277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.018335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.018493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.018539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.018708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.018754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.018898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.018946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.019115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.019161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.019347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.019395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.019543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.019590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.019776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.019822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.019977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.020023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.020173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.020220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.020442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.020490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.020641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.020688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.020839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.020885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.021028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.021076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.021258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.021325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.021517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.021565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.021767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.021813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.021990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.022036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.022180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.022226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.022418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.022464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.022652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.022723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.022935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.022993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.023129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.023175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.023346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.023393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.023534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.023581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.023728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.023775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.023961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.024008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.024217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.024262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.024437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.024506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.024682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.024728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.024871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.024917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.025049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.025095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 
00:28:20.714 [2024-11-19 10:56:08.025263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.025318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.025511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.025558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.025734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.025781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.714 [2024-11-19 10:56:08.025935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.714 [2024-11-19 10:56:08.025983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.714 qpair failed and we were unable to recover it. 00:28:20.715 [2024-11-19 10:56:08.026122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.715 [2024-11-19 10:56:08.026168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.715 qpair failed and we were unable to recover it. 
00:28:20.717 [2024-11-19 10:56:08.051608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.051654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.051872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.051917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.052089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.052136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.052273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.052327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.052534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.052580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 
00:28:20.717 [2024-11-19 10:56:08.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.052781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.052957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.053004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.053190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.053236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.717 qpair failed and we were unable to recover it. 00:28:20.717 [2024-11-19 10:56:08.053450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.717 [2024-11-19 10:56:08.053498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.053681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.053727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.053881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.053926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.054119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.054165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.054355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.054403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.054609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.054655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.054800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.054848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.055011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.055057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.055239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.055284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.055509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.055573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.055735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.055809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.055978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.056023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.056178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.056224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.056453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.056518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.056724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.056789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.057047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.057209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.057255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.057454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.057501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.057681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.057727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.057939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.057984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.058163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.058209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.058422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.058487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.058708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.058772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.058951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.058996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.059149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.059195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.059379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.059447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.059652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.059717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.059925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.059971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.060150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.060196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.060336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.060385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.060570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.060636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.060816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.060880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.061063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.061109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.061295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.061350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.061552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.061620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.061827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.061891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.062076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.062124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.062329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.062376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.062602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.062666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.062902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.062965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.063165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.063211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 00:28:20.718 [2024-11-19 10:56:08.063423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.718 [2024-11-19 10:56:08.063490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.718 qpair failed and we were unable to recover it. 
00:28:20.718 [2024-11-19 10:56:08.063659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.063725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.063892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.063965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.064150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.064197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.064406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.064477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.064687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.064763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 
00:28:20.719 [2024-11-19 10:56:08.064950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.064996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.065132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.065180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.065368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.065416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.065620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.065687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.065848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.065894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 
00:28:20.719 [2024-11-19 10:56:08.066048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.066096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.066244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.066298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.066488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.066535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.066744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.066790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.066972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.067019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 
00:28:20.719 [2024-11-19 10:56:08.067190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.067237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.067385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.067432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.067612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.067658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.067841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.067887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.068036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.068083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 
00:28:20.719 [2024-11-19 10:56:08.068219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.068265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.068420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.068469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.068695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.068741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.068889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.068934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 00:28:20.719 [2024-11-19 10:56:08.069116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.719 [2024-11-19 10:56:08.069162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.719 qpair failed and we were unable to recover it. 
00:28:20.719 [2024-11-19 10:56:08.069367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.719 [2024-11-19 10:56:08.069414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.719 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failure triple repeated for tqpair=0x7f33ac000b90 (addr=10.0.0.2, port=4420), timestamps advancing 10:56:08.069591 through 10:56:08.095604 (log time 00:28:20.719-00:28:20.722) ...]
00:28:20.722 [2024-11-19 10:56:08.095788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.095836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.096024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.096070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.096244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.096290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.096481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.096528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.096718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.096763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 
00:28:20.722 [2024-11-19 10:56:08.096926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.096973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.097147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.097195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.097365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.097413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.097669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.097735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.097912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.097957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 
00:28:20.722 [2024-11-19 10:56:08.098148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.098194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.098418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.098484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.098677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.098744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.098925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.098971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.099139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.099184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 
00:28:20.722 [2024-11-19 10:56:08.099337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.099385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.099606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.099672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.099813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.099858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.100043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.100089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.100238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.100284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 
00:28:20.722 [2024-11-19 10:56:08.100466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.722 [2024-11-19 10:56:08.100533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.722 qpair failed and we were unable to recover it. 00:28:20.722 [2024-11-19 10:56:08.100751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.100797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.101024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.101070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.101234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.101280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.101488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.101560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.101763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.101809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.101988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.102034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.102199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.102245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.102439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.102486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.102647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.102693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.102839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.102884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.103092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.103146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.103296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.103361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.103577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.103646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.103853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.103921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.104103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.104148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.104352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.104400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.104581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.104651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.104821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.104886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.105041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.105086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.105256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.105314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.105494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.105539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.105773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.105949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.105995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.106136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.106182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.106341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.106390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.106572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.106618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.106773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.106819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.106949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.106995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.107150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.107196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.107404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.107452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.107603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.107650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.107805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.107851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.108031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.108078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.108289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.108345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.108494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.108540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.108711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.108758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.108970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.109034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.109216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.109262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.109438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.109486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 
00:28:20.723 [2024-11-19 10:56:08.109672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.109718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.109929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.109974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.723 qpair failed and we were unable to recover it. 00:28:20.723 [2024-11-19 10:56:08.110156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.723 [2024-11-19 10:56:08.110202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.110396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.110473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.110631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.110699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 
00:28:20.724 [2024-11-19 10:56:08.110913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.110978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.111160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.111206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.111350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.111399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.111580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.111647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.111846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.111913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 
00:28:20.724 [2024-11-19 10:56:08.112126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.112172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.112341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.112402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.112646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.112711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.112967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.113044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 00:28:20.724 [2024-11-19 10:56:08.113198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.113244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 
00:28:20.724 [2024-11-19 10:56:08.113426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.724 [2024-11-19 10:56:08.113521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.724 qpair failed and we were unable to recover it. 
00:28:20.726 [previous message repeated 114 times, timestamps 10:56:08.113726 through 10:56:08.141473, same tqpair=0x7f33ac000b90, addr=10.0.0.2, port=4420]
00:28:20.727 [2024-11-19 10:56:08.141734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.141801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.142018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.142089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.142298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.142356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.142587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.142655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.142835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.142899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.143076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.143123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.143377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.143442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.143623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.143690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.143844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.143916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.144133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.144178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.144381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.144428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.144579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.144625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.144810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.144856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.145036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.145084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.145233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.145279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.145479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.145527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.145749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.145796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.145982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.146027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.146201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.146249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.146452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.146499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.146698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.146744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.146910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.146956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.147138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.147185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.147327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.147374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.147525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.147598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.147787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.147854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.147994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.148040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.148227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.148273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.148508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.148574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.148748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.148817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 
00:28:20.727 [2024-11-19 10:56:08.148994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.149040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.149193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.149239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.149438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.727 [2024-11-19 10:56:08.149484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.727 qpair failed and we were unable to recover it. 00:28:20.727 [2024-11-19 10:56:08.149667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.149714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.149901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.149947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.150130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.150176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.150339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.150386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.150614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.150681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.150892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.150966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.151151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.151199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.151377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.151449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.151631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.151702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.151894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.151940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.152127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.152172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.152317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.152364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.152534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.152605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.152737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.152782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.152960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.153005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.153177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.153222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.153399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.153446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.153615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.153661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.153828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.153874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.154048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.154094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.154239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.154285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.154460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.154507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.154662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.154709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.154880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.154926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.155091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.155137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.155325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.155373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.155551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.155599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.155782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.155829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.155997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.156043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.156199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.156245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.156439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.156487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.156623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.156668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.156842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.156888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.157027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.157073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.157233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.157279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.157445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.157493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.157676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.157722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.728 [2024-11-19 10:56:08.157859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.157905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.158077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.158123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.158318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.158365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.158511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.158558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 00:28:20.728 [2024-11-19 10:56:08.158706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.728 [2024-11-19 10:56:08.158754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.728 qpair failed and we were unable to recover it. 
00:28:20.729 [2024-11-19 10:56:08.158947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.729 [2024-11-19 10:56:08.158993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.729 qpair failed and we were unable to recover it. 00:28:20.729 [2024-11-19 10:56:08.159134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.729 [2024-11-19 10:56:08.159180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.729 qpair failed and we were unable to recover it. 00:28:20.729 [2024-11-19 10:56:08.159328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.729 [2024-11-19 10:56:08.159378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.729 qpair failed and we were unable to recover it. 00:28:20.729 [2024-11-19 10:56:08.159542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.729 [2024-11-19 10:56:08.159588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.729 qpair failed and we were unable to recover it. 00:28:20.729 [2024-11-19 10:56:08.159791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.729 [2024-11-19 10:56:08.159859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.729 qpair failed and we were unable to recover it. 
00:28:20.731 [2024-11-19 10:56:08.185924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.731 [2024-11-19 10:56:08.185956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:56:08.186054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.731 [2024-11-19 10:56:08.186088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:56:08.186209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.731 [2024-11-19 10:56:08.186242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:56:08.186345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.731 [2024-11-19 10:56:08.186378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:56:08.186519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.731 [2024-11-19 10:56:08.186552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.731 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.186759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.186826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.186971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.187018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.187209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.187262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.187376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.187409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.187563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.187636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.187782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.187830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.188048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.188099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.188276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.188346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.188531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.188579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.188789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.188835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.189047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.189093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.189277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.189335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.189555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.189623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.189847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.189913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.190121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.190167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.190371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.190442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.190658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.190729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.190945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.191019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.191225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.191271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.191470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.191533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.191774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.191840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.192039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.192084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.192244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.192290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.192498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.192562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.192780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.192848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.193025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.193091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.193283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.193340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.193575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.193644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.193861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.193928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.194119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.194165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.194318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.194367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.194581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.194647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.194867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.194930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.195110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.195164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.195338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.195385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.195596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.195662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.195828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.195898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.196083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.196129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.196322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.196371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.732 [2024-11-19 10:56:08.196545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.196592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 
00:28:20.732 [2024-11-19 10:56:08.196753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.732 [2024-11-19 10:56:08.196799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.732 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.196973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.197019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.197165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.197211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.197377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.197450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.197701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.197766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.197903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.197949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.198094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.198141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.198340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.198388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.198604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.198650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.198792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.198839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.199002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.199048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.199229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.199275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.199495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.199541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.199730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.199797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.199955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.200001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.200197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.200243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.200442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.200488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.200664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.200709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.200859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.200905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.201052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.201097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.201269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.201324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.201546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.201592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.201805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.201851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.202065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.202110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.202273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.202329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.202547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.202614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.202861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.202926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.203101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.203146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.203346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.203394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.203566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.203638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.733 [2024-11-19 10:56:08.203820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.203886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.204033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.204078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.204272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.204332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.204561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.204635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 00:28:20.733 [2024-11-19 10:56:08.204821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.733 [2024-11-19 10:56:08.204887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.733 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.230221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.230267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.230484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.230550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.230784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.230849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.231031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.231077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.231233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.231280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.231467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.231537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.231690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.231739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.231935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.232008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.232171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.232225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.232412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.232460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.232656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.232703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.232912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.232958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.233151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.233343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.233391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.233582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.233648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.233875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.233937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.234114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.234160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.234393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.234461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.234588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.234635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.234807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.234853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.235009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.235054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.235247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.235293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.235502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.235549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.235735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.235782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.235925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.235970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.236167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.236213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.236403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.236468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.236712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.236776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.236927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.236976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.237171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.237219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.237445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.237515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.237667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.237715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.237925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.237988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.238199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.238245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.238475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.238541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.238791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.238857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.238996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.239043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.239208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.239253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.239435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.239500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.239735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.239800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 
00:28:20.736 [2024-11-19 10:56:08.239985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.240031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.240194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.240241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.240458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.240523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.240762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.736 [2024-11-19 10:56:08.240809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.736 qpair failed and we were unable to recover it. 00:28:20.736 [2024-11-19 10:56:08.240994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.241040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.241192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.241237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.241427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.241492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.241616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.241662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.241890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.241970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.242146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.242192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.242385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.242454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.242653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.242717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.242896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.242944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.243108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.243154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.243349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.243396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.243541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.243590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.243806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.243852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.244059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.244270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.244326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.244510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.244722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.244786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.244973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.245020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.245206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.245252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.245449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.245523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.245747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.245794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.245977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.246022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.246203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.246250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.246421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.246489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.246646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.246709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.246860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.246906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.247059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.247105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.247260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.247316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.247478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.247665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.247710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 00:28:20.737 [2024-11-19 10:56:08.247859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.737 [2024-11-19 10:56:08.247905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.737 qpair failed and we were unable to recover it. 
00:28:20.737 [2024-11-19 10:56:08.248093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.737 [2024-11-19 10:56:08.248140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.737 qpair failed and we were unable to recover it.
00:28:20.739 [2024-11-19 10:56:08.275022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.275070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.275260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.275316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.275511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.275578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.275723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.275769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.275915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.275961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 
00:28:20.739 [2024-11-19 10:56:08.276177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.276225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.276414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.276460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.276651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.276697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.276878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.276925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.277082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.277129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 
00:28:20.739 [2024-11-19 10:56:08.277317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.277364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.277542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.277588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.277751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.277796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.278013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.278060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.278236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.278282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 
00:28:20.739 [2024-11-19 10:56:08.278449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.278495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.278670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.278716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.278932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.278977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.279166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.279219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 00:28:20.739 [2024-11-19 10:56:08.279374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.739 [2024-11-19 10:56:08.279421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.739 qpair failed and we were unable to recover it. 
00:28:20.739 [2024-11-19 10:56:08.279640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.279705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.279912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.279977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.280142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.280190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.280354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.280402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.280578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.280649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.280841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.280906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.281092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.281140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.281347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.281602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.281650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.281832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.281879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.282033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.282079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.282248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.282294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.282522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.282588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.282850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.282917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.283095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.283144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.283292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.283348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.283515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.283561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.283734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.283781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.283951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.283996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.284131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.284176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.284362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.284436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.284662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.284707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.284916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.284962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.285143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.285188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.285451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.285517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.285709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.285772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.285961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.286007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.286183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.286229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.286405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.286472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.286723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.286791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.287051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.287228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.287274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.287471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.287706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.287752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.287987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.288035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.288188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.288235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.288412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.288459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.288626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.288672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.288850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.288903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.289051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.289098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.289246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.289294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.289503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.289568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.289824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.289890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.290033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.290079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.290260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.290320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.290471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.290520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.290733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.290779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.290961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.291008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.291187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.291233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.291463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.291511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 
00:28:20.740 [2024-11-19 10:56:08.291753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.291819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.292007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.292053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.740 qpair failed and we were unable to recover it. 00:28:20.740 [2024-11-19 10:56:08.292245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.740 [2024-11-19 10:56:08.292294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.292496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.292565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.292733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.292829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 
00:28:20.741 [2024-11-19 10:56:08.292982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.293028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.293211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.293257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.293496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.293542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.293755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.293801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.293948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.293994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 
00:28:20.741 [2024-11-19 10:56:08.294179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.294225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.294388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.294434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.294607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.294653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.294809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.294855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 00:28:20.741 [2024-11-19 10:56:08.294996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.741 [2024-11-19 10:56:08.295043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:20.741 qpair failed and we were unable to recover it. 
00:28:20.741 [2024-11-19 10:56:08.295210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.295259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.295430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.295484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.295696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.295765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.295937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.295989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.296179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.296227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.296414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.296461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.296677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.296724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.296909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.296956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.297220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.297279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.297543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.297590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.297733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.297805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.298113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.298204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.298419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.298466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.298616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.298663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.298929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.298992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.299249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.299324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.299530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.299578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.299801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.299867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.300186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.300245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.300479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.300529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.300742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.300807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.301081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.301146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.301422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.301470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.301647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.301693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.301845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.301905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.302143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.302204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.302398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.302445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.302629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.302690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.302963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.303024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.303319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.303384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.303576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.303622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.303813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.303872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.304129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.304427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.304476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.304692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.304753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.305043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.305105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.305316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.305382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.305567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.305639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.305889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.305936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.306133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.306194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.306467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.306528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.306717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.306763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.306974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.307315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.741 [2024-11-19 10:56:08.307363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.741 qpair failed and we were unable to recover it.
00:28:20.741 [2024-11-19 10:56:08.307546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.742 [2024-11-19 10:56:08.307592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.742 qpair failed and we were unable to recover it.
00:28:20.742 [2024-11-19 10:56:08.307755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.742 [2024-11-19 10:56:08.307818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.742 qpair failed and we were unable to recover it.
00:28:20.742 [2024-11-19 10:56:08.308020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.742 [2024-11-19 10:56:08.308090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:20.742 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.308321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.308368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.308529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.308575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.308821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.308882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.309128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.309188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.309392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.309439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.016 [2024-11-19 10:56:08.309677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.016 [2024-11-19 10:56:08.309737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.016 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.309975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.310033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.310358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.310546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.310627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.310908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.310969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.311207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.311267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.311498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.311574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.311889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.311950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.312225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.312284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.312486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.312533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.312885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.312948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.313219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.313266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.313500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.313547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.313869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.313947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.314191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.314238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.314444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.314492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.314676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.314724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.314889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.314940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.315137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.315198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.315464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.315513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.315658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.315702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.315875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.315920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.316160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.316255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.316519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.316580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.316794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.316854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.317089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.317151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.317362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.317425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.317628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.317690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.317923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.318002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.318220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.318285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.318543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.318602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.017 [2024-11-19 10:56:08.318793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.017 [2024-11-19 10:56:08.318855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.017 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.319093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.319154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.319378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.319440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.319707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.319768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.319961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.320030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.320300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.320394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.320601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.320662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.320897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.320957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.321169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.321228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.321426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.321487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.321699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.321761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.322002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.018 [2024-11-19 10:56:08.322062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.018 qpair failed and we were unable to recover it.
00:28:21.018 [2024-11-19 10:56:08.322265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.322364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.322640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.322702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.322979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.323041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.323231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.323292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.323550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.323613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 
00:28:21.018 [2024-11-19 10:56:08.323907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.323967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.324166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.324228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.324486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.324549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.324819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.324879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.325115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.325175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 
00:28:21.018 [2024-11-19 10:56:08.325403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.325466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.325698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.325758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.326001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.326062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.326294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.326372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.326577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.326638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 
00:28:21.018 [2024-11-19 10:56:08.326829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.326889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.327121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.327181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.327409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.327475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.327691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.018 [2024-11-19 10:56:08.327757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.018 qpair failed and we were unable to recover it. 00:28:21.018 [2024-11-19 10:56:08.327963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.328028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.328281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.328362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.328590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.328655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.328940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.328999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.329279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.329375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.329665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.329724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.329934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.330009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.330213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.330278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.330519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.330585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.330799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.330864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.331149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.331214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.331449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.331515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.331731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.331799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.332056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.332121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.332370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.332437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.332662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.332727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.332942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.333008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.333293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.333374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.333631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.333696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.333959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.334024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.334287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.334391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.334692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.334757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.335016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.335082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.335274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.335355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.335622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.335688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.335971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.336038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.336296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.336381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.336674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.336740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.336962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.337026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.337350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.337416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 00:28:21.019 [2024-11-19 10:56:08.337629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.019 [2024-11-19 10:56:08.337693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.019 qpair failed and we were unable to recover it. 
00:28:21.019 [2024-11-19 10:56:08.337932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.337998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.338284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.338365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.338618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.338682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.338970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.339034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.339332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.339398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.339646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.339714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.339944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.340009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.340231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.340297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.340563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.340629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.340889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.340953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.341216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.341281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.341571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.341636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.341877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.341944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.342160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.342225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.342456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.342522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.342782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.342857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.343147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.343212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.343488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.343554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.343837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.343901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.344114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.344180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.344458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.344525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.344818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.344884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.345140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.345205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.345477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.345545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.345794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.345859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.346144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.346208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.346453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.346519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.346727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.346792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.346994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.347063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.347367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.347433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 
00:28:21.020 [2024-11-19 10:56:08.347690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.020 [2024-11-19 10:56:08.347754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.020 qpair failed and we were unable to recover it. 00:28:21.020 [2024-11-19 10:56:08.347972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.348039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.348256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.348342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.348620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.348686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.348972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.349037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.349242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.349325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.349583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.349648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.349891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.349958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.350215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.350280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.350580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.350645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.350909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.350976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.351197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.351262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.351508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.351576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.351836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.351902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.352152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.352220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.352493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.352559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.352812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.352877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.353064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.353129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.353356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.353422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.353666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.353731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.353970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.354037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.354235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.354317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.354544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.354611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.354813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.354877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.355125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.355189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.355442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.355520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.355819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.355883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.356138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.356204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.356473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.356541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.356788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.356854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 
00:28:21.021 [2024-11-19 10:56:08.357084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.357150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.357389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.021 [2024-11-19 10:56:08.357457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.021 qpair failed and we were unable to recover it. 00:28:21.021 [2024-11-19 10:56:08.357723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.357788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.358028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.358092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.358357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.358424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 
00:28:21.022 [2024-11-19 10:56:08.358690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.358756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.358996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.359062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.359350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.359417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.359707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.359772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.360066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.360132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 
00:28:21.022 [2024-11-19 10:56:08.360392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.360458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.360669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.360734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.360939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.361003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.361265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.361350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.361556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.361622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 
00:28:21.022 [2024-11-19 10:56:08.361825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.361889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.362115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.362182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.362411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.362479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.362732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.362797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.363044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.363108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 
00:28:21.022 [2024-11-19 10:56:08.363366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.363432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.363653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.363718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.363943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.364011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.364281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.364362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.364700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.364769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 
00:28:21.022 [2024-11-19 10:56:08.365025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.365124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.365463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.365532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.022 [2024-11-19 10:56:08.365823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.022 [2024-11-19 10:56:08.365889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.022 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.366097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.366161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.366379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.366446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.366691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.366760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.367042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.367106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.367353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.367419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.367645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.367710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.367963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.368028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.368255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.368352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.368569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.368635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.368881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.368944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.369206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.369275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.369545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.369611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.369866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.369931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.370223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.370288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.370563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.370629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.370850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.370914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.371124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.371188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.371510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.371805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.371869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.372103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.372167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.372458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.372524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.372827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.372892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.373112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.373176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.373434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.373500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.373757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.373825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.374073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.374137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.374371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.374438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 
00:28:21.023 [2024-11-19 10:56:08.374685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.374750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.375033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.375098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.375299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.375383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.023 [2024-11-19 10:56:08.375649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.023 [2024-11-19 10:56:08.375716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.023 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.375971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.376035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 
00:28:21.024 [2024-11-19 10:56:08.376337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.376404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.376658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.376722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.376978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.377042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.377298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.377383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.377594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.377661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 
00:28:21.024 [2024-11-19 10:56:08.377908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.377974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.378239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.378323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.378618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.378683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.378883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.378951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.379213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.379279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 
00:28:21.024 [2024-11-19 10:56:08.379586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.379651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.379893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.379960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.380201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.380266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.380539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.380606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.380857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.380922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 
00:28:21.024 [2024-11-19 10:56:08.381177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.381253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.381532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.381596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.381847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.381914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.382158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.382224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 00:28:21.024 [2024-11-19 10:56:08.382473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.024 [2024-11-19 10:56:08.382539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.024 qpair failed and we were unable to recover it. 
00:28:21.024 [2024-11-19 10:56:08.382830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.382895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.383186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.383251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.383499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.383564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.383812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.383876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.384155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.384221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.384466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.384533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.384770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.384834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.385082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.385145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.385387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.385454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.024 [2024-11-19 10:56:08.385713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.024 [2024-11-19 10:56:08.385778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.024 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.385992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.386056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.386255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.386334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.386546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.386612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.386850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.386915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.387158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.387224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.387494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.387560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.387845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.387909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.388156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.388223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.388525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.388591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.388880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.388945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.389243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.389329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.389600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.389664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.389899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.389965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.390182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.390247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.390531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.390598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.390902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.390968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.391208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.391273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.391597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.391663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.391914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.391979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.392170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.392236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.392540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.392606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.392892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.392958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.393223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.393289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.393576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.393640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.393897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.393962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.394224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.394301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.394576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.394641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.394862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.394926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.025 qpair failed and we were unable to recover it.
00:28:21.025 [2024-11-19 10:56:08.395152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.025 [2024-11-19 10:56:08.395217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.395461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.395527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.395748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.395812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.396105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.396169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.396466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.396532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.396789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.396854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.397039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.397103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.397386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.397452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.397693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.397757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.397973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.398036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.398241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.398319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.398575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.398640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.398920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.398985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.399242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.399320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.399621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.399866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.399930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.400132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.400197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.400507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.400575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.400847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.400911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.401132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.401199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.401516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.401581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.401868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.401933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.402233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.402298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.402595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.402660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.402921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.402986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.403181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.403246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.403523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.403589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.403836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.403901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.404189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.026 [2024-11-19 10:56:08.404254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.026 qpair failed and we were unable to recover it.
00:28:21.026 [2024-11-19 10:56:08.404535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.404600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.404889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.404956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.405249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.405331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.405643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.405707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.405929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.405994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.406243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.406343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.406644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.406708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.406976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.407041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.407287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.407384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.407631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.407696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.407987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.408052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.408426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.408672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.408737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.408937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.409002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.409231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.409295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.409601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.409666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.409912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.409978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.410202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.410266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.410507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.410571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.410765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.410829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.411073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.411137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.411414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.411480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.411781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.411846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.412137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.412201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.412505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.412572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.412792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.412861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.413060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.413125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.413374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.027 [2024-11-19 10:56:08.413440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.027 qpair failed and we were unable to recover it.
00:28:21.027 [2024-11-19 10:56:08.413643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.027 [2024-11-19 10:56:08.413709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.027 qpair failed and we were unable to recover it. 00:28:21.027 [2024-11-19 10:56:08.413999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.027 [2024-11-19 10:56:08.414063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.027 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.414355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.414421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.414776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.414841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.415139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.415204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 
00:28:21.028 [2024-11-19 10:56:08.415481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.415547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.415839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.415904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.416196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.416261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.416538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.416602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.416847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.416911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 
00:28:21.028 [2024-11-19 10:56:08.417161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.417226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.417545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.417611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.417841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.417905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.418194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.418258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.418561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.418625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 
00:28:21.028 [2024-11-19 10:56:08.418861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.418925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.419182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.419247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.419535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.419599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.419849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.419914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.420160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.420225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 
00:28:21.028 [2024-11-19 10:56:08.420489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.420565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.420806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.420871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.421089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.421155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.421400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.421465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.421723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.421790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 
00:28:21.028 [2024-11-19 10:56:08.422047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.422113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.422371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.028 [2024-11-19 10:56:08.422436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.028 qpair failed and we were unable to recover it. 00:28:21.028 [2024-11-19 10:56:08.422731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.422795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.423039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.423102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.423384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.423450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.423749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.423814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.424074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.424140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.424425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.424491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.424750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.425055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.425120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.425337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.425406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.425660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.425727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.425949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.426015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.426323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.426391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.426682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.427026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.427090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.427384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.427452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.427740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.427805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.428066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.428131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.428415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.428483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.428744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.428808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.429053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.429119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.429421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.429489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.429784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.429849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.430067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.430135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.430402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.430468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.430731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.430796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.431033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.431097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.431356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.431423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.431712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.431776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 
00:28:21.029 [2024-11-19 10:56:08.432020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.432086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.432382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.029 [2024-11-19 10:56:08.432447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.029 qpair failed and we were unable to recover it. 00:28:21.029 [2024-11-19 10:56:08.432727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.432791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.433080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.433146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.433379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.433446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.433654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.433737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.433965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.434031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.434199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.434263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.434542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.434608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.434914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.434979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.435268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.435353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.435652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.435717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.435999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.436065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.436363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.436429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.436650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.436716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.436977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.437043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.437319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.437385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.437592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.437657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.437869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.437936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.438240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.438336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.438610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.438675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.438923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.438988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.439234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.439299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.439575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.439640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.439901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.439965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.440233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.440298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.440624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.440689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.440972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.441036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.441299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.441383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.441655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.441720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 
00:28:21.030 [2024-11-19 10:56:08.441939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.030 [2024-11-19 10:56:08.442003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.030 qpair failed and we were unable to recover it. 00:28:21.030 [2024-11-19 10:56:08.442288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.442382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.442638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.442703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.442902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.442967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.443218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.443284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.443600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.443665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.443952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.444018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.444322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.444390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.444668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.444734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.445025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.445090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.445346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.445413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.445677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.445743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.446048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.446403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.446470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.446681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.446747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.447037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.447114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.447403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.447471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.447728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.447793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.448074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.448139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.448377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.448443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.448744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.448809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.449051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.449119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.449371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.449440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.449742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.449806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.450099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.450165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.450464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.450531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.450780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.450843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.451088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.451152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.451373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.451443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.451690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.451756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 
00:28:21.031 [2024-11-19 10:56:08.452018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.031 [2024-11-19 10:56:08.452082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.031 qpair failed and we were unable to recover it. 00:28:21.031 [2024-11-19 10:56:08.452331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.452397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.452705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.452770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.453077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.453142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.453440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.453505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.453802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.453867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.454154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.454514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.454579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.454823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.454889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.455150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.455214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.455521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.455587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.455782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.455848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.456155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.456219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.456536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.456603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.456900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.456965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.457270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.457354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.457593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.457657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.457919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.457983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.458202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.458270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.458591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.458656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.458900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.458966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.459221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.459286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.459567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.459632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.459889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.459953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.460249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.460331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.460617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.460682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.460951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.461238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.461321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.461615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.461680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.461942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.462008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 
00:28:21.032 [2024-11-19 10:56:08.462293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.032 [2024-11-19 10:56:08.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.032 qpair failed and we were unable to recover it. 00:28:21.032 [2024-11-19 10:56:08.462644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.462710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.462964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.463029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.463276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.463366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.463667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.463734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 
00:28:21.033 [2024-11-19 10:56:08.463994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.464062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.464330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.464397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.464662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.464726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.464981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.465045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.465344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.465412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 
00:28:21.033 [2024-11-19 10:56:08.465678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.465743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.466040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.466105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.466367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.466434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.466728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.466792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.467052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.467120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 
00:28:21.033 [2024-11-19 10:56:08.467388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.467455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.467748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.467813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.468033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.468098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.468358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.468423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.468607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.468671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 
00:28:21.033 [2024-11-19 10:56:08.468971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.469036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.469282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.469368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.469625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.469704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.469993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.470059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.470359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.470427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 
00:28:21.033 [2024-11-19 10:56:08.470719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.470785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.471037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.471104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.471353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.471422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.033 [2024-11-19 10:56:08.471654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.033 [2024-11-19 10:56:08.471718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.033 qpair failed and we were unable to recover it. 00:28:21.034 [2024-11-19 10:56:08.471978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.472043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 
00:28:21.034 [2024-11-19 10:56:08.472317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.472385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 00:28:21.034 [2024-11-19 10:56:08.472670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.472734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 00:28:21.034 [2024-11-19 10:56:08.473029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.473093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 00:28:21.034 [2024-11-19 10:56:08.473391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.473457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 00:28:21.034 [2024-11-19 10:56:08.473660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.034 [2024-11-19 10:56:08.473725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.034 qpair failed and we were unable to recover it. 
00:28:21.034 [2024-11-19 10:56:08.474005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.034 [2024-11-19 10:56:08.474069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.034 qpair failed and we were unable to recover it.
00:28:21.037 [ ... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f33a8000b90, addr=10.0.0.2, port=4420 repeats continuously from 10:56:08.474005 through 10:56:08.511929 ... ]
00:28:21.037 [2024-11-19 10:56:08.512217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.037 [2024-11-19 10:56:08.512282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.037 qpair failed and we were unable to recover it. 00:28:21.037 [2024-11-19 10:56:08.512589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.037 [2024-11-19 10:56:08.512654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.037 qpair failed and we were unable to recover it. 00:28:21.037 [2024-11-19 10:56:08.512948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.037 [2024-11-19 10:56:08.513013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.037 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.513295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.513380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.513639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.513705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.513990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.514055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.514368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.514435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.514650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.514714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.514981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.515045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.515276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.515359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.515653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.515717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.515997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.516062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.516358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.516424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.516659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.516722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.516927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.516997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.517296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.517376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.517677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.517741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.518039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.518105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.518406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.518473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.518739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.518804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.519069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.519133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.519357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.519426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.519679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.519743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.519995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.520060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.520332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.520398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.520615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.520681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.520976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.521041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.521244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.521327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.521597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.521662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.521961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.522029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 
00:28:21.038 [2024-11-19 10:56:08.522272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.522374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.522686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.522751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.523044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.523119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.523357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.038 [2024-11-19 10:56:08.523424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.038 qpair failed and we were unable to recover it. 00:28:21.038 [2024-11-19 10:56:08.523715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.523779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.524032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.524100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.524295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.524375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.524658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.524723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.525008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.525073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.525341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.525407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.525661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.525726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.525931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.525997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.526288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.526379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.526644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.526710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.526961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.527026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.527235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.527324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.527589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.527656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.527908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.527971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.528230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.528295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.528616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.528682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.528901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.528968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.529255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.529339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.529585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.529651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.529937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.530000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.530242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.530342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.530596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.530662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.530862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.530925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.531171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.531235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.531501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.531568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.531836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.531903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 
00:28:21.039 [2024-11-19 10:56:08.532132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.532197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.039 [2024-11-19 10:56:08.532506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.039 [2024-11-19 10:56:08.532578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.039 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.532845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.532911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.533174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.533238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.533556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.533623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 
00:28:21.040 [2024-11-19 10:56:08.533909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.534267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.534361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.534637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.534702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.534994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.535059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.535327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.535393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 
00:28:21.040 [2024-11-19 10:56:08.535652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.535718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.535974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.536038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.536337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.536415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.536714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.536779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 00:28:21.040 [2024-11-19 10:56:08.537067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.040 [2024-11-19 10:56:08.537131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.040 qpair failed and we were unable to recover it. 
00:28:21.040 [2024-11-19 10:56:08.537388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.040 [2024-11-19 10:56:08.537455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.040 qpair failed and we were unable to recover it.
00:28:21.040 [... identical connect() failed, errno = 111 / sock connection error / qpair failed triples repeat, timestamps 10:56:08.537 through 10:56:08.555 ...]
00:28:21.042 [2024-11-19 10:56:08.556990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.042 [2024-11-19 10:56:08.557059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1457114 Killed "${NVMF_APP[@]}" "$@"
00:28:21.042 qpair failed and we were unable to recover it.
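The repeated `errno = 111` (ECONNREFUSED) lines are consistent with the `Killed` line above: once the target process is gone, nothing is listening on 10.0.0.2:4420 and every `connect()` is actively refused. A minimal sketch reproducing the symptom with bash's `/dev/tcp` redirection (the probed address and port here are hypothetical, chosen only because nothing is expected to listen there):

```shell
#!/usr/bin/env bash
# errno 111 (ECONNREFUSED) means the peer replied with RST because no
# process is listening on the destination port. bash's /dev/tcp/<host>/<port>
# pseudo-file performs the same connect() that SPDK's posix_sock_create
# does, so probing a closed port reproduces the failure mode.
if (exec 3<>/dev/tcp/127.0.0.1/1) 2>/dev/null; then
    echo "connected"
else
    echo "connection refused"   # shell-level analogue of errno = 111
fi
```

The subshell around `exec` keeps the file descriptor from leaking into the calling shell when the connect happens to succeed.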
00:28:21.042 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:21.042 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:21.042 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:21.042 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:21.042 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1457670
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1457670
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1457670 ']'
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:21.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:21.043 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:21.043 [... connect() failed, errno = 111 / qpair failure triples for tqpair=0x7f33a8000b90 interleaved with this trace omitted, timestamps 10:56:08.557 through 10:56:08.566 ...]
00:28:21.043 [... connect() failed, errno = 111 / qpair failure triples continue, timestamps 10:56:08.566 through 10:56:08.572 ...]
00:28:21.044 [2024-11-19 10:56:08.572717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.572782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.573045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.573110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.573410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.573477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.573747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.573812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.574051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.574115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 
00:28:21.044 [2024-11-19 10:56:08.574334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.574400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.574639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.574706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.574986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.575051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.575319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.575387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.575681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.575746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 
00:28:21.044 [2024-11-19 10:56:08.576025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.576090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.576350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.576418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.576716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.576781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.577041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.577107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.577406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.577473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 
00:28:21.044 [2024-11-19 10:56:08.577740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.577806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.578071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.578147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.578407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.578476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.578770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.578836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.579084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.579149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 
00:28:21.044 [2024-11-19 10:56:08.579389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.579456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.579683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.579750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.580013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.580078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.580351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.580417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.580634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.580699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 
00:28:21.044 [2024-11-19 10:56:08.580964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.581029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.581283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.581370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.581633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.581698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.581943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.582008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.044 qpair failed and we were unable to recover it. 00:28:21.044 [2024-11-19 10:56:08.582256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.044 [2024-11-19 10:56:08.582338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.582560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.582628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.582916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.582980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.583263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.583345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.583635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.583700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.583947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.584011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.584269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.584369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.584672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.584738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.585010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.585074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.585341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.585408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.585651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.585717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.585970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.586034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.586278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.586360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.586616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.586685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.586949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.587016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.587298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.587377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.587587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.587655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.587909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.587973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.588228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.588295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.588617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.588683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.588971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.589035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.589338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.589405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.589707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.589772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.589992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.590057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.590324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.590392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.590657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.590721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.591076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.591331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.591410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.591665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.591731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.592023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.592088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.592317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.592386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 
00:28:21.045 [2024-11-19 10:56:08.592686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.592750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.593005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.593070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.593325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.045 [2024-11-19 10:56:08.593391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.045 qpair failed and we were unable to recover it. 00:28:21.045 [2024-11-19 10:56:08.593647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.593714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.593976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.594042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 
00:28:21.046 [2024-11-19 10:56:08.594300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.594386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.594680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.594744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.594994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.595059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.595284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.595376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.595665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.595729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 
00:28:21.046 [2024-11-19 10:56:08.595967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.596035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.596296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.596398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.596644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.596709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.596990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.597055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 00:28:21.046 [2024-11-19 10:56:08.597347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.046 [2024-11-19 10:56:08.597414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.046 qpair failed and we were unable to recover it. 
00:28:21.046 [2024-11-19 10:56:08.597710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.046 [2024-11-19 10:56:08.597774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.046 qpair failed and we were unable to recover it.
00:28:21.046 [... the same connect() errno=111 / tqpair=0x7f33a8000b90 failure triplet repeats with advancing timestamps, 10:56:08.598018 through 10:56:08.614121 ...]
00:28:21.047 [... connect() errno=111 / tqpair=0x7f33a8000b90 failure triplet repeats, 10:56:08.614424 through 10:56:08.615195 ...]
00:28:21.047 [2024-11-19 10:56:08.615373] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization...
00:28:21.047 [2024-11-19 10:56:08.615467] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:21.047 [2024-11-19 10:56:08.615509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.047 [2024-11-19 10:56:08.615573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420
00:28:21.047 qpair failed and we were unable to recover it.
00:28:21.048 [... the same connect() errno=111 / tqpair=0x7f33a8000b90 failure triplet repeats with advancing timestamps, 10:56:08.615820 through 10:56:08.636188; elapsed time advances from 00:28:21.047 to 00:28:21.328 ...]
00:28:21.328 [2024-11-19 10:56:08.636455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.636524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.636792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.636857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.637114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.637179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.637441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.637508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.637807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.637871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 
00:28:21.328 [2024-11-19 10:56:08.638167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.638233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.638493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.638559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.638825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.638890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.639187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.639253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.639555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.639621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 
00:28:21.328 [2024-11-19 10:56:08.639904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.639969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.640264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.640344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.640641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.640706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.328 qpair failed and we were unable to recover it. 00:28:21.328 [2024-11-19 10:56:08.640996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.328 [2024-11-19 10:56:08.641060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.641272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.641376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.641627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.641695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.641925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.641991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.642276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.642361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.642661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.642944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.643009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.643204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.643271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.643554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.643622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.643891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.643955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.644245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.644327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.644630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.644695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.644985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.645049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.645343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.645420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.645719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.645785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.646075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.646139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.646444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.646521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.646739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.646804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.647056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.647120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.647359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.647425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.647683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.647748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.648007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.648071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.648330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.648397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.648658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.648722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.648964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.649028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.649279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.649368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.649650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.649716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.649975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.650040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.650301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.650401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.650642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.650706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.650988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.651053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.651320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.651388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.651648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.651712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.651961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.652283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.652489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.652616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.652739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.652853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.652879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.652994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.653398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.653909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.653995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.654108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.654248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.654405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.654521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.654669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.654813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.654931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.654963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.655109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.655135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.655257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.655282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.655396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.655435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 
00:28:21.329 [2024-11-19 10:56:08.655531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.655559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.655666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.329 [2024-11-19 10:56:08.655692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.329 qpair failed and we were unable to recover it. 00:28:21.329 [2024-11-19 10:56:08.655810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.655836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.655944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.655970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.656065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 
00:28:21.330 [2024-11-19 10:56:08.656207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.656334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.656444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.656608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 00:28:21.330 [2024-11-19 10:56:08.656727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.330 [2024-11-19 10:56:08.656752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.330 qpair failed and we were unable to recover it. 
[... the same three-line error pattern — posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 10:56:08.656870 through 10:56:08.670652, alternating between tqpair=0x7f33ac000b90 and tqpair=0x7f33a8000b90, always with addr=10.0.0.2, port=4420 ...]
00:28:21.332 [2024-11-19 10:56:08.670737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.670763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.670854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.670880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.670972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.671364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.671927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.671953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.672071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.672728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.672866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.672974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.673374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.673899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.673925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.674002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.674621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.674900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.674992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.675249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.675838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.675960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.675988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.332 [2024-11-19 10:56:08.676452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 00:28:21.332 [2024-11-19 10:56:08.676927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.332 [2024-11-19 10:56:08.676954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.332 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.677077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.677695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.677968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.677994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.678337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.678793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.678905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.678931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.679560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.679938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.679964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 00:28:21.333 [2024-11-19 10:56:08.680075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.680100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it. 
00:28:21.333 [2024-11-19 10:56:08.680186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.333 [2024-11-19 10:56:08.680212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.333 qpair failed and we were unable to recover it.
[The connect()/qpair error triplet above repeats continuously from 10:56:08.680349 through 10:56:08.695003 for tqpairs 0x7f33a8000b90, 0x7f33ac000b90, and 0x7f33b4000b90, all targeting addr=10.0.0.2, port=4420.]
00:28:21.335 [2024-11-19 10:56:08.692867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:21.335 [2024-11-19 10:56:08.695110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.695252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.695364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.695513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.695650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.695764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.695910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.695941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.696470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.696961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.696987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.697070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.697769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.697898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.697980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.698382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.335 [2024-11-19 10:56:08.698878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.698903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 
00:28:21.335 [2024-11-19 10:56:08.699007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.335 [2024-11-19 10:56:08.699033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.335 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.699644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.699860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.699886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.700334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.700927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.700954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.701075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.701215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.701337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.701455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.701615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.701725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.701866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.701892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.702410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.702963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.702991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.703099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.703252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.703426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.703559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.703675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.703819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.703936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.703962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.704426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.704907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.704933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.705042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.705207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.705349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.705467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.705640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.705776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.705893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.705918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.706011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.706038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.706179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.706220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.706356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.706385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 
00:28:21.336 [2024-11-19 10:56:08.706483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.706509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.336 [2024-11-19 10:56:08.706600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.336 [2024-11-19 10:56:08.706632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.336 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.706728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.706758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.706839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.706866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.706948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.706975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.707091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.707215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.707367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.707479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.707618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.707757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.707919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.707944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.708455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.708960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.708988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.709070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.709669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.709944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.709971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.710384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.710906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.710932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.711017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.711593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.711941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.711967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.712188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.712850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.712969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.712994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 
00:28:21.337 [2024-11-19 10:56:08.713487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.337 [2024-11-19 10:56:08.713792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.337 qpair failed and we were unable to recover it. 00:28:21.337 [2024-11-19 10:56:08.713880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.713906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.714121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.714711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.714961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.714987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.715101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.715246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.715373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.715648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.715780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.715896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.715922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.716184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.716830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.716939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.716970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.717475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.717896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.717989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.718095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.718211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.718355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.718504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.718621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.718794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.718953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.718978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.719526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.719864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.719976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.720116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.720229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.720372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.720509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.720657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.720794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.720956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.720981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.721096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.721314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.721455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.721593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 
00:28:21.338 [2024-11-19 10:56:08.721731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.338 qpair failed and we were unable to recover it. 00:28:21.338 [2024-11-19 10:56:08.721877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.338 [2024-11-19 10:56:08.721903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.722423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.722904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.722929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.723067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.723702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.723960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.723985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.724386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.724933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.724958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.725069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.725722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.725858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.725974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.726422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.726927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.726953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.727088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.727226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.727350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.727515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.727662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.727799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.727940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.727970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.728465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.339 [2024-11-19 10:56:08.728968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.728993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 
00:28:21.339 [2024-11-19 10:56:08.729127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.339 [2024-11-19 10:56:08.729153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.339 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.729245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.729391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.729534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.729706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 
00:28:21.340 [2024-11-19 10:56:08.729854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.729962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.729988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.730127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.730152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.730282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.730329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 00:28:21.340 [2024-11-19 10:56:08.730460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.730488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it. 
00:28:21.340 [2024-11-19 10:56:08.730574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.340 [2024-11-19 10:56:08.730604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.340 qpair failed and we were unable to recover it.
[... the identical three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 10:56:08.730696 through 10:56:08.745627, cycling over tqpair=0x7f33a8000b90, 0x7f33b4000b90, 0x7f33ac000b90, and 0x1cdbfa0, all targeting addr=10.0.0.2, port=4420 ...]
00:28:21.342 [2024-11-19 10:56:08.745718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.745745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.745851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.745877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.745963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.745991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.746457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.746948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.746974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.747066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.747708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.747934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.747960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.748281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.748860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.748996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.749699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.749952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.749979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.750338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.750856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.750883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.750973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 
00:28:21.342 [2024-11-19 10:56:08.751670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.751843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.751974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.752002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.342 qpair failed and we were unable to recover it. 00:28:21.342 [2024-11-19 10:56:08.752094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.342 [2024-11-19 10:56:08.752121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.752239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.752372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.752487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.752732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.752878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.752905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.753042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.753638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.753899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.753977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.754003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.754088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.754115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.754730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:21.343 [2024-11-19 10:56:08.754765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:21.343 [2024-11-19 10:56:08.754780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:21.343 [2024-11-19 10:56:08.754792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:21.343 [2024-11-19 10:56:08.754802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:21.343 [2024-11-19 10:56:08.756402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:21.343 [2024-11-19 10:56:08.756433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:21.343 [2024-11-19 10:56:08.756482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:21.343 [2024-11-19 10:56:08.756485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:21.343 [2024-11-19 10:56:08.757661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.757686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.757769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.757795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.757903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.757929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.758265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 
00:28:21.343 [2024-11-19 10:56:08.758849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.343 [2024-11-19 10:56:08.758876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.343 qpair failed and we were unable to recover it. 00:28:21.343 [2024-11-19 10:56:08.758997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.759495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.759847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.759872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.760111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.760725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.760913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.760940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.761428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.761925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.761951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.762062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.762742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.762903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.762991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.763402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.763870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.763898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.763997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.764680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.764921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.764949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.765295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.765750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.765900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.765929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.766024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.766050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.766133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.766159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.766243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.766269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 00:28:21.344 [2024-11-19 10:56:08.766403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.766432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.344 qpair failed and we were unable to recover it. 
00:28:21.344 [2024-11-19 10:56:08.766516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.344 [2024-11-19 10:56:08.766543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.766656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.766683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.766767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.766795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.766918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.766945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.767171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.767776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.767887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.767914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.768373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.768824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.768934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.768961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.769508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.769862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.769978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.770092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.770702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.770962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.770989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.771387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.771883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.771910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.771998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 
00:28:21.345 [2024-11-19 10:56:08.772545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.772901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.345 qpair failed and we were unable to recover it. 00:28:21.345 [2024-11-19 10:56:08.772981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.345 [2024-11-19 10:56:08.773008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.773144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.773714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.773865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.774348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.774897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.774922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.775006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.775642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.775880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.775976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.776217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.776867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.776893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.776981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.777531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.777911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.777999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.778115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.778245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.778359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.778463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 00:28:21.346 [2024-11-19 10:56:08.778570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it. 
00:28:21.346 [2024-11-19 10:56:08.778705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.346 [2024-11-19 10:56:08.778732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.346 qpair failed and we were unable to recover it.
[... the identical two-line failure (connect() failed, errno = 111, followed by the sock connection error and "qpair failed and we were unable to recover it.") repeats continuously from 10:56:08.778810 through 10:56:08.792629, cycling over tqpairs 0x7f33b4000b90, 0x7f33a8000b90, 0x7f33ac000b90, and 0x1cdbfa0, every attempt targeting addr=10.0.0.2, port=4420; no qpair recovers ...]
00:28:21.348 [2024-11-19 10:56:08.792722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.792748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.792829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.792855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.792935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.792960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.793052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.793091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.793201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.793240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 
00:28:21.348 [2024-11-19 10:56:08.793338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.793366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.793454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.793481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.348 [2024-11-19 10:56:08.793600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.348 [2024-11-19 10:56:08.793627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.348 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.793714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.793741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.793830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.793857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.793946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.793975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.794549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.794895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.794920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.795166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.795809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.795959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.795987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.796510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.796958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.796984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.797072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.797691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.797944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.797971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.798276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.798810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.798950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.798978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.799554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.799922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.799949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.800028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 
00:28:21.349 [2024-11-19 10:56:08.800142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.800271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.800434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33b4000b90 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.800584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.349 qpair failed and we were unable to recover it. 00:28:21.349 [2024-11-19 10:56:08.800716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.349 [2024-11-19 10:56:08.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 
00:28:21.350 [2024-11-19 10:56:08.800852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.800878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.800960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.800985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 
00:28:21.350 [2024-11-19 10:56:08.801441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 00:28:21.350 [2024-11-19 10:56:08.801910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.350 [2024-11-19 10:56:08.801935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.350 qpair failed and we were unable to recover it. 
00:28:21.350 [2024-11-19 10:56:08.802017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.350 [2024-11-19 10:56:08.802042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420
00:28:21.350 qpair failed and we were unable to recover it.
[... the message pair above repeats ~115 times between 10:56:08.802017 and 10:56:08.815853, identical except for the timestamp and the tqpair pointer; affected tqpairs: 0x1cdbfa0, 0x7f33ac000b90, 0x7f33a8000b90, 0x7f33b4000b90; target is always addr=10.0.0.2, port=4420 ...]
00:28:21.351 A controller has encountered a failure and is being reset.
00:28:21.352 [2024-11-19 10:56:08.815967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.815992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.816558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.816916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.816998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.817115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.817720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.817954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.817980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33a8000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.818293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33ac000b90 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.818770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.818908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.818933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.819508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.819904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.819998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.820032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 
00:28:21.352 [2024-11-19 10:56:08.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.820142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.820225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.820255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdbfa0 with addr=10.0.0.2, port=4420 00:28:21.352 qpair failed and we were unable to recover it. 00:28:21.352 [2024-11-19 10:56:08.820406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.352 [2024-11-19 10:56:08.820457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9f30 with addr=10.0.0.2, port=4420 00:28:21.352 [2024-11-19 10:56:08.820479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9f30 is same with the state(6) to be set 00:28:21.352 [2024-11-19 10:56:08.820507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9f30 (9): Bad file descriptor 00:28:21.352 [2024-11-19 10:56:08.820526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:21.352 [2024-11-19 10:56:08.820540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:21.352 [2024-11-19 10:56:08.820558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:21.352 Unable to reset the controller. 
00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.352 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 Malloc0 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 [2024-11-19 
10:56:08.941221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 [2024-11-19 
10:56:08.969480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.610 10:56:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1457263 00:28:22.560 Controller properly reset. 00:28:27.817 Initializing NVMe Controllers 00:28:27.817 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:27.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:27.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:27.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:27.817 Initialization complete. Launching workers. 
00:28:27.817 Starting thread on core 1 00:28:27.817 Starting thread on core 2 00:28:27.817 Starting thread on core 3 00:28:27.817 Starting thread on core 0 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:27.818 00:28:27.818 real 0m10.646s 00:28:27.818 user 0m34.376s 00:28:27.818 sys 0m7.222s 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.818 ************************************ 00:28:27.818 END TEST nvmf_target_disconnect_tc2 00:28:27.818 ************************************ 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.818 rmmod nvme_tcp 00:28:27.818 rmmod nvme_fabrics 00:28:27.818 rmmod nvme_keyring 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1457670 ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1457670 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1457670 ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1457670 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457670 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457670' 00:28:27.818 killing process with pid 1457670 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1457670 00:28:27.818 10:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1457670 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.818 10:56:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.722 10:56:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.722 00:28:29.722 real 0m15.633s 00:28:29.722 user 0m59.948s 00:28:29.722 sys 0m9.751s 00:28:29.722 10:56:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.722 10:56:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.722 ************************************ 00:28:29.722 END TEST nvmf_target_disconnect 00:28:29.722 ************************************ 00:28:29.722 10:56:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:29.722 00:28:29.722 real 5m6.783s 00:28:29.722 user 11m6.332s 00:28:29.722 sys 1m16.425s 00:28:29.722 10:56:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.722 10:56:17 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.722 ************************************ 00:28:29.722 END TEST nvmf_host 00:28:29.722 ************************************ 00:28:29.722 10:56:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:29.722 10:56:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:29.722 10:56:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:29.722 10:56:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:29.722 10:56:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.722 10:56:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.722 ************************************ 00:28:29.722 START TEST nvmf_target_core_interrupt_mode 00:28:29.722 ************************************ 00:28:29.722 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:29.722 * Looking for test storage... 
00:28:29.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:29.722 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:29.722 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:29.722 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:29.981 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:29.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.981 --rc 
genhtml_branch_coverage=1 00:28:29.981 --rc genhtml_function_coverage=1 00:28:29.981 --rc genhtml_legend=1 00:28:29.981 --rc geninfo_all_blocks=1 00:28:29.981 --rc geninfo_unexecuted_blocks=1 00:28:29.981 00:28:29.981 ' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:29.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.981 --rc genhtml_branch_coverage=1 00:28:29.981 --rc genhtml_function_coverage=1 00:28:29.981 --rc genhtml_legend=1 00:28:29.981 --rc geninfo_all_blocks=1 00:28:29.981 --rc geninfo_unexecuted_blocks=1 00:28:29.981 00:28:29.981 ' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:29.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.981 --rc genhtml_branch_coverage=1 00:28:29.981 --rc genhtml_function_coverage=1 00:28:29.981 --rc genhtml_legend=1 00:28:29.981 --rc geninfo_all_blocks=1 00:28:29.981 --rc geninfo_unexecuted_blocks=1 00:28:29.981 00:28:29.981 ' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:29.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.981 --rc genhtml_branch_coverage=1 00:28:29.981 --rc genhtml_function_coverage=1 00:28:29.981 --rc genhtml_legend=1 00:28:29.981 --rc geninfo_all_blocks=1 00:28:29.981 --rc geninfo_unexecuted_blocks=1 00:28:29.981 00:28:29.981 ' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.981 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.982 
10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.982 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:29.982 
10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:29.982 ************************************ 00:28:29.982 START TEST nvmf_abort 00:28:29.982 ************************************ 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:29.982 * Looking for test storage... 
00:28:29.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.982 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:29.983 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:29.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.983 --rc genhtml_branch_coverage=1 00:28:29.983 --rc genhtml_function_coverage=1 00:28:29.983 --rc genhtml_legend=1 00:28:29.983 --rc geninfo_all_blocks=1 00:28:29.983 --rc geninfo_unexecuted_blocks=1 00:28:29.983 00:28:29.983 ' 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:29.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.983 --rc genhtml_branch_coverage=1 00:28:29.983 --rc genhtml_function_coverage=1 00:28:29.983 --rc genhtml_legend=1 00:28:29.983 --rc geninfo_all_blocks=1 00:28:29.983 --rc geninfo_unexecuted_blocks=1 00:28:29.983 00:28:29.983 ' 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:29.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.983 --rc genhtml_branch_coverage=1 00:28:29.983 --rc genhtml_function_coverage=1 00:28:29.983 --rc genhtml_legend=1 00:28:29.983 --rc geninfo_all_blocks=1 00:28:29.983 --rc geninfo_unexecuted_blocks=1 00:28:29.983 00:28:29.983 ' 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:29.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.983 --rc genhtml_branch_coverage=1 00:28:29.983 --rc genhtml_function_coverage=1 00:28:29.983 --rc genhtml_legend=1 00:28:29.983 --rc geninfo_all_blocks=1 00:28:29.983 --rc geninfo_unexecuted_blocks=1 00:28:29.983 00:28:29.983 ' 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.983 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.242 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.242 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.243 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.243 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
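Earlier in this trace (`scripts/common.sh@333`-`@368`) the harness runs `lt 1.15 2` via `cmp_versions` to compare the installed lcov against version 2 before picking `LCOV_OPTS`: both version strings are split on `.-:` into arrays and compared element by element. A standalone sketch of that comparison, re-derived from the xtrace output (not the actual SPDK `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt helpers traced above: split versions on
# ".-:" and compare numerically, field by field.

decimal() {
    # Non-numeric fields fall back to 0.
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk up to the longer version; missing fields count as 0.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    return 1   # equal: neither strictly < nor >
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"   # matches the trace: lt returns 0
```

This is why the trace shows `ver1_l=2`, `ver2_l=1`, then a single loop iteration comparing `1` against `2` before `return 0`.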
00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:32.165 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.166 10:56:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:32.166 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:32.166 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.166 
10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:32.166 Found net devices under 0000:09:00.0: cvl_0_0 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:32.166 Found net devices under 0000:09:00.1: cvl_0_1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.166 10:56:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:32.166 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:28:32.167 00:28:32.167 --- 10.0.0.2 ping statistics --- 00:28:32.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.167 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:28:32.167 00:28:32.167 --- 10.0.0.1 ping statistics --- 00:28:32.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.167 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.167 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1460483 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1460483 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1460483 ']' 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.430 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.430 [2024-11-19 10:56:19.828361] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:32.430 [2024-11-19 10:56:19.829525] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:28:32.430 [2024-11-19 10:56:19.829608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.430 [2024-11-19 10:56:19.899445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:32.430 [2024-11-19 10:56:19.953569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.430 [2024-11-19 10:56:19.953621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.430 [2024-11-19 10:56:19.953648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.430 [2024-11-19 10:56:19.953660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.430 [2024-11-19 10:56:19.953669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.430 [2024-11-19 10:56:19.955153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.430 [2024-11-19 10:56:19.955263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.430 [2024-11-19 10:56:19.955267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.430 [2024-11-19 10:56:20.047114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:32.430 [2024-11-19 10:56:20.047392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:32.430 [2024-11-19 10:56:20.047416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:32.430 [2024-11-19 10:56:20.047639] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.688 [2024-11-19 10:56:20.100078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.688 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:32.689 Malloc0 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.689 Delay0 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.689 [2024-11-19 10:56:20.176192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.689 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:32.946 [2024-11-19 10:56:20.327463] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:34.843 Initializing NVMe Controllers 00:28:34.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:34.843 controller IO queue size 128 less than required 00:28:34.843 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:34.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:34.843 Initialization complete. Launching workers. 
00:28:34.843 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26525 00:28:34.843 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26582, failed to submit 66 00:28:34.843 success 26525, unsuccessful 57, failed 0 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.843 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.843 rmmod nvme_tcp 00:28:34.843 rmmod nvme_fabrics 00:28:35.101 rmmod nvme_keyring 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.101 10:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1460483 ']' 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1460483 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1460483 ']' 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1460483 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1460483 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1460483' 00:28:35.101 killing process with pid 1460483 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1460483 00:28:35.101 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1460483 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.359 10:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.359 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.259 00:28:37.259 real 0m7.358s 00:28:37.259 user 0m9.575s 00:28:37.259 sys 0m2.910s 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 ************************************ 00:28:37.259 END TEST nvmf_abort 00:28:37.259 ************************************ 00:28:37.259 10:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 ************************************ 00:28:37.259 START TEST nvmf_ns_hotplug_stress 00:28:37.259 ************************************ 00:28:37.259 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:37.518 * Looking for test storage... 
00:28:37.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.518 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.518 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.518 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.518 10:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.518 10:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.518 --rc genhtml_branch_coverage=1 00:28:37.518 --rc genhtml_function_coverage=1 00:28:37.518 --rc genhtml_legend=1 00:28:37.518 --rc geninfo_all_blocks=1 00:28:37.518 --rc geninfo_unexecuted_blocks=1 00:28:37.518 00:28:37.518 ' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.518 --rc genhtml_branch_coverage=1 00:28:37.518 --rc genhtml_function_coverage=1 00:28:37.518 --rc genhtml_legend=1 00:28:37.518 --rc geninfo_all_blocks=1 00:28:37.518 --rc geninfo_unexecuted_blocks=1 00:28:37.518 00:28:37.518 ' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.518 --rc genhtml_branch_coverage=1 00:28:37.518 --rc genhtml_function_coverage=1 00:28:37.518 --rc genhtml_legend=1 00:28:37.518 --rc geninfo_all_blocks=1 00:28:37.518 --rc geninfo_unexecuted_blocks=1 00:28:37.518 00:28:37.518 ' 00:28:37.518 10:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.518 --rc genhtml_branch_coverage=1 00:28:37.518 --rc genhtml_function_coverage=1 00:28:37.518 --rc genhtml_legend=1 00:28:37.518 --rc geninfo_all_blocks=1 00:28:37.518 --rc geninfo_unexecuted_blocks=1 00:28:37.518 00:28:37.518 ' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.518 10:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.518 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.519 
10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.519 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.048 
10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.048 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:40.048 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.048 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:40.048 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.048 
10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:40.048 Found net devices under 0000:09:00.0: cvl_0_0 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.048 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:40.049 Found net devices under 0000:09:00.1: cvl_0_1 00:28:40.049 
10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:28:40.049 00:28:40.049 --- 10.0.0.2 ping statistics --- 00:28:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.049 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:40.049 00:28:40.049 --- 10.0.0.1 ping statistics --- 00:28:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.049 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:40.049 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1462824 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1462824 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1462824 ']' 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.049 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:40.049 [2024-11-19 10:56:27.410875] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:40.049 [2024-11-19 10:56:27.411902] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:28:40.049 [2024-11-19 10:56:27.411959] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.049 [2024-11-19 10:56:27.482444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.049 [2024-11-19 10:56:27.540723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.049 [2024-11-19 10:56:27.540775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.049 [2024-11-19 10:56:27.540803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.049 [2024-11-19 10:56:27.540814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.049 [2024-11-19 10:56:27.540823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.049 [2024-11-19 10:56:27.542382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.049 [2024-11-19 10:56:27.542411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.049 [2024-11-19 10:56:27.542415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.049 [2024-11-19 10:56:27.629112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:40.049 [2024-11-19 10:56:27.629355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:40.050 [2024-11-19 10:56:27.629377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:40.050 [2024-11-19 10:56:27.629642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:40.050 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.050 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:40.050 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.050 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.050 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:40.330 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.330 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:40.330 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:40.330 [2024-11-19 10:56:27.919162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.330 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:40.894 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.152 [2024-11-19 10:56:28.575655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.152 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:41.409 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:41.696 Malloc0 00:28:41.696 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:41.951 Delay0 00:28:41.951 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.207 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:42.462 NULL1 00:28:42.462 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:42.718 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1463124 00:28:42.718 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:42.718 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:42.718 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.976 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.541 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:43.541 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:43.541 true 00:28:43.541 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:43.541 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.798 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.055 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:44.055 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:44.312 true 00:28:44.569 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:44.569 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.133 Read completed with error (sct=0, sc=11) 00:28:45.390 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.648 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:45.648 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:45.905 true 00:28:45.905 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:45.905 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.163 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.420 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:46.420 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:46.677 true 00:28:46.677 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:46.677 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.934 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.192 10:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:47.192 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:47.449 true 00:28:47.449 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:47.449 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.381 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.639 10:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:48.639 10:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:48.896 true 00:28:48.896 10:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:48.896 10:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.153 10:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.410 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:49.410 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:49.667 true 00:28:49.924 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:49.924 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.181 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.439 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:50.439 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:50.697 true 00:28:50.697 10:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:50.697 10:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.628 10:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.886 10:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:51.886 10:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:52.147 true 00:28:52.147 10:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:52.147 10:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.406 10:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.664 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:52.664 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:52.921 true 00:28:52.921 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:52.921 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
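The records above repeat one pattern per iteration of `ns_hotplug_stress.sh` (lines 44-50 in the script's own numbering): confirm the perf process is still alive with `kill -0`, detach namespace 1, re-attach `Delay0`, bump the null bdev size counter, and resize `NULL1`. A dry-run sketch of that loop, with `rpc.py` replaced by `echo` so it runs without a live target (the fixed three iterations stand in for the real liveness check on `$PERF_PID`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the hotplug stress loop seen in the log above.
# The real test invokes scripts/rpc.py against a running target; we echo instead.
RPC="echo rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"
null_size=1000
for _ in 1 2 3; do                       # real loop: while kill -0 "$PERF_PID"
  $RPC nvmf_subsystem_remove_ns "$NQN" 1
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0
  null_size=$((null_size + 1))
  $RPC bdev_null_resize NULL1 "$null_size"
done
```

The resize target grows by one per pass, which is why the log shows `bdev_null_resize NULL1 1001`, `1002`, `1003`, and so on while `spdk_nvme_perf` keeps I/O in flight against the subsystem.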
00:28:53.179 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.437 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:53.437 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:53.694 true 00:28:53.694 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:53.694 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.626 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.883 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:54.883 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:55.141 true 00:28:55.141 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:55.141 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.398 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.655 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:55.655 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:55.913 true 00:28:56.170 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:56.171 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.428 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.685 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:56.685 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:56.943 true 00:28:56.943 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:56.943 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.875 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.132 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:58.132 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:58.390 true 00:28:58.390 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:58.390 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.647 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.905 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:58.905 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:59.164 true 00:28:59.164 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:59.164 10:56:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.421 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.679 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:59.679 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:59.937 true 00:28:59.937 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:28:59.937 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.869 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.127 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:01.128 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:01.386 true 00:29:01.386 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 
00:29:01.386 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.643 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.900 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:01.900 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:02.158 true 00:29:02.415 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:02.415 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.677 10:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.991 10:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:02.991 10:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:02.991 true 00:29:02.991 10:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1463124 00:29:02.991 10:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.366 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.366 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:04.366 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:04.624 true 00:29:04.624 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:04.624 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.882 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.140 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:05.140 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:05.398 true 00:29:05.398 10:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:05.398 10:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.656 10:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.914 10:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:05.914 10:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:06.171 true 00:29:06.171 10:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:06.171 10:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.545 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.545 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:07.545 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:07.802 true 
00:29:07.802 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:07.802 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.061 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.318 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:08.318 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:08.576 true 00:29:08.576 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:08.576 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.862 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.180 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:09.180 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 
00:29:09.459 true 00:29:09.459 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:09.459 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.391 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.648 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:10.649 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:10.906 true 00:29:10.906 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:10.906 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.163 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.420 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:11.421 10:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:11.688 true 00:29:11.688 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:11.688 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.945 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.203 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:12.203 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:12.460 true 00:29:12.460 10:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124 00:29:12.460 10:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.832 Initializing NVMe Controllers 00:29:13.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.832 Controller IO queue size 128, less than required. 00:29:13.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:13.832 Controller IO queue size 128, less than required. 
00:29:13.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:13.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:13.832 Initialization complete. Launching workers.
00:29:13.832 ========================================================
00:29:13.832 Latency(us)
00:29:13.832 Device Information : IOPS MiB/s Average min max
00:29:13.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 284.07 0.14 163488.26 3409.67 1014072.01
00:29:13.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7525.03 3.67 16959.88 2246.00 452798.92
00:29:13.832 ========================================================
00:29:13.832 Total : 7809.09 3.81 22290.05 2246.00 1014072.01
00:29:13.832
00:29:13.832 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:13.832 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:29:13.832 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:29:14.090 true
00:29:14.090 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1463124
00:29:14.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1463124) - No such process
00:29:14.090 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1463124
00:29:14.090 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:14.348 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:14.606 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:14.606 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:14.606 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:14.606 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:14.606 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:14.863 null0
00:29:14.863 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:14.863 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:14.863 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:15.120 null1
00:29:15.120 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:15.121 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:15.121 10:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:29:15.377 null2
00:29:15.635 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:15.635 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:15.635 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:29:15.891 null3
00:29:15.891 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:15.891 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:15.891 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:29:16.148 null4
00:29:16.148 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:16.148 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:16.148 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:29:16.405 null5
00:29:16.405 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:16.405 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:16.405 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:29:16.663 null6
00:29:16.663 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:16.663 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:16.663 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:29:16.921 null7
00:29:16.921 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:16.921 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1467378 1467379 1467381 1467383 1467385 1467387 1467389 1467391
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:16.922 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:17.180 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:17.180 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:17.180 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:17.180 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:17.181 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:17.181 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:17.181 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:17.181 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.438 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.439 10:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:17.697 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:17.955 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:18.213 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.213 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.214 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:18.471 10:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:18.729 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:18.986 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:19.244 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:19.244 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:19.245 10:57:06
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.245 10:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:19.502 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:19.502 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:19.503 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.761 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.020 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.020 10:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.020 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.277 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.535 10:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.535 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.793 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.051 10:57:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.051 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.052 10:57:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.052 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.310 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.568 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.569 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.826 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.826 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.826 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.826 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.084 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.342 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.342 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.342 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.343 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.601 10:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.601 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:22.859 10:57:10 
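The xtrace above is the body of the ns_hotplug_stress loop (target/ns_hotplug_stress.sh lines 16-18 in the markers): up to ten iterations, each attaching null bdevs as namespaces 1-8 of cnode1 and then detaching them all. A minimal runnable sketch of that pattern, with a hypothetical `rpc` stub standing in for scripts/rpc.py so it does not need a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the add/remove namespace loop seen in the xtrace above.
# `rpc` is a hypothetical stub for spdk/scripts/rpc.py; it only echoes the call.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; i++ )); do
    # Attach null bdevs null0..null7 as namespaces 1..8
    # (the real test issues these in varying order, as the trace shows).
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # Detach them all again to exercise namespace hotplug teardown.
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
done
```

With the echoing stub, each iteration prints eight add calls followed by eight remove calls; in the real test the same RPCs hit a running nvmf target while I/O is in flight.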
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.859 rmmod nvme_tcp 00:29:22.859 rmmod nvme_fabrics 00:29:22.859 rmmod nvme_keyring 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1462824 ']' 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1462824 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1462824 ']' 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1462824 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.859 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1462824 00:29:23.117 10:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.117 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.117 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1462824' 00:29:23.117 killing process with pid 1462824 00:29:23.117 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1462824 00:29:23.117 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1462824 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.377 10:57:10 
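The `killprocess 1462824` trace above (common/autotest_common.sh lines 954-978 in the markers) first checks the pid is alive with `kill -0`, inspects the process name via `ps -o comm=` to avoid killing a `sudo` wrapper, then kills and waits. A self-contained sketch of that pattern, under the assumption that plain SIGTERM suffices (the helper's exact signal handling may differ):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace above.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1         # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap the child; ignore nonzero exit
}
```

`wait` only reaps children of the current shell, which holds here because the target app is launched from the same test script.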
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.377 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.280 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.280 00:29:25.280 real 0m47.929s 00:29:25.280 user 3m20.651s 00:29:25.280 sys 0m22.278s 00:29:25.280 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:25.281 ************************************ 00:29:25.281 END TEST nvmf_ns_hotplug_stress 00:29:25.281 ************************************ 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:25.281 ************************************ 00:29:25.281 START TEST nvmf_delete_subsystem 00:29:25.281 ************************************ 00:29:25.281 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:25.540 * Looking for test storage... 00:29:25.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.540 
10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.540 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:25.540 10:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.540 --rc genhtml_branch_coverage=1 00:29:25.540 --rc genhtml_function_coverage=1 00:29:25.540 --rc genhtml_legend=1 00:29:25.540 --rc geninfo_all_blocks=1 00:29:25.540 --rc geninfo_unexecuted_blocks=1 00:29:25.540 00:29:25.540 ' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.540 --rc genhtml_branch_coverage=1 00:29:25.540 --rc genhtml_function_coverage=1 00:29:25.540 --rc genhtml_legend=1 00:29:25.540 --rc geninfo_all_blocks=1 00:29:25.540 --rc geninfo_unexecuted_blocks=1 00:29:25.540 00:29:25.540 ' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.540 --rc genhtml_branch_coverage=1 00:29:25.540 --rc genhtml_function_coverage=1 00:29:25.540 --rc genhtml_legend=1 00:29:25.540 --rc geninfo_all_blocks=1 00:29:25.540 --rc 
geninfo_unexecuted_blocks=1 00:29:25.540 00:29:25.540 ' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.540 --rc genhtml_branch_coverage=1 00:29:25.540 --rc genhtml_function_coverage=1 00:29:25.540 --rc genhtml_legend=1 00:29:25.540 --rc geninfo_all_blocks=1 00:29:25.540 --rc geninfo_unexecuted_blocks=1 00:29:25.540 00:29:25.540 ' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
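The trace above shows delete_subsystem.sh probing the installed lcov version via `lt 1.15 2` / `cmp_versions` (scripts/common.sh lines 333-368 in the markers): both versions are split on `.-:` into arrays and compared field by field. A simplified, runnable sketch of that comparison, splitting on `.` only:

```shell
#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above
# (the real cmp_versions also splits on '-' and ':').
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal, so not less-than
}
```

So `version_lt 1.15 2` succeeds, which is what gates the choice of LCOV_OPTS in the trace.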
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.540 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.541 
10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.541 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.541 10:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:28.097 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:29:28.097 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.097 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.098 10:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:28.098 Found net devices under 0000:09:00.0: cvl_0_0 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:28.098 Found net devices under 0000:09:00.1: cvl_0_1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.098 10:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:28.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:29:28.098 00:29:28.098 --- 10.0.0.2 ping statistics --- 00:29:28.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.098 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:28.098 00:29:28.098 --- 10.0.0.1 ping statistics --- 00:29:28.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.098 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1470655 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1470655 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1470655 ']' 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.098 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.098 [2024-11-19 10:57:15.358482] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:28.098 [2024-11-19 10:57:15.359601] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:29:28.098 [2024-11-19 10:57:15.359673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.098 [2024-11-19 10:57:15.445650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.098 [2024-11-19 10:57:15.515509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.099 [2024-11-19 10:57:15.515564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.099 [2024-11-19 10:57:15.515598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.099 [2024-11-19 10:57:15.515614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.099 [2024-11-19 10:57:15.515627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.099 [2024-11-19 10:57:15.517174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.099 [2024-11-19 10:57:15.517183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.099 [2024-11-19 10:57:15.614137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:28.099 [2024-11-19 10:57:15.614140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:28.099 [2024-11-19 10:57:15.614484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 [2024-11-19 10:57:15.665968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 [2024-11-19 10:57:15.686167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 NULL1 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 Delay0 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1470752 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:28.099 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:28.356 [2024-11-19 10:57:15.768174] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:30.254 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:30.254 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.254 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, 
sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error 
(sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 [2024-11-19 10:57:17.930599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x805860 is same with the state(6) to be set 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, 
sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 starting I/O failed: -6 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Write completed with error (sct=0, sc=8) 00:29:30.513 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 starting I/O failed: -6 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Write completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 starting I/O failed: -6 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read completed with error (sct=0, sc=8) 00:29:30.514 Read 
completed with error (sct=0, sc=8)
00:29:30.514 [further Read/Write completions with error (sct=0, sc=8), interleaved with "starting I/O failed: -6" as new submissions were rejected; identical lines elided]
00:29:30.514 [2024-11-19 10:57:17.931738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a64000c40 is same with the state(6) to be set
00:29:31.449 [2024-11-19 10:57:18.904609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8069a0 is same with the state(6) to be set
00:29:31.449 [identical Read/Write error completions elided]
00:29:31.449 [2024-11-19 10:57:18.931274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a6400d800 is same with the state(6) to be set
00:29:31.449 [identical Read/Write error completions elided]
00:29:31.449 [2024-11-19 10:57:18.932758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x805680 is same with the state(6) to be set
00:29:31.449 [identical Read/Write error completions elided]
00:29:31.449 [2024-11-19 10:57:18.932953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8052c0 is same with the state(6) to be set
00:29:31.450 [identical Read/Write error completions elided]
00:29:31.450 [2024-11-19 10:57:18.933210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a6400d020 is same with the state(6) to be set
00:29:31.450 Initializing NVMe Controllers
00:29:31.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:31.450 Controller IO queue size 128, less than required.
00:29:31.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:31.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:31.450 Initialization complete. Launching workers.
00:29:31.450 ========================================================
00:29:31.450                                                                      Latency(us)
00:29:31.450 Device Information                                                 :   IOPS   MiB/s    Average     min         max
00:29:31.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.69 0.08 890668.68 600.18 1012032.02
00:29:31.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 184.10 0.09 905432.24 740.99 1012791.59
00:29:31.450 ========================================================
00:29:31.450 Total                                                              : 356.79 0.17 898286.59 600.18 1012791.59
00:29:31.450
00:29:31.450 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:31.450 [2024-11-19 10:57:18.934387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8069a0 (9): Bad file descriptor
00:29:31.450 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:31.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:31.450 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1470752
00:29:31.450 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1470752
00:29:32.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line
35: kill: (1470752) - No such process 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1470752 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1470752 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1470752 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:32.023 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:32.024 10:57:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.024 [2024-11-19 10:57:19.454098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1471197 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:32.024 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:32.024 [2024-11-19 10:57:19.513898] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:32.589 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:32.589 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:32.589 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:33.156 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:33.156 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:33.156 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:33.413 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:33.413 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 
00:29:33.414 10:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:33.979 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:33.979 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:33.979 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:34.544 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:34.544 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:34.544 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:35.110 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:35.110 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197 00:29:35.110 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:35.110 Initializing NVMe Controllers 00:29:35.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.110 Controller IO queue size 128, less than required. 00:29:35.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:35.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:35.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:35.110 Initialization complete. Launching workers. 
00:29:35.110 ========================================================
00:29:35.110                                                                      Latency(us)
00:29:35.110 Device Information                                                 :   IOPS   MiB/s    Average        min          max
00:29:35.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006986.23 1000161.16 1046106.48
00:29:35.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005860.69 1000204.35 1044888.36
00:29:35.110 ========================================================
00:29:35.110 Total                                                              : 256.00 0.12 1006423.46 1000161.16 1046106.48
00:29:35.110
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1471197
00:29:35.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1471197) - No such process
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1471197
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.368 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.627 rmmod nvme_tcp 00:29:35.627 rmmod nvme_fabrics 00:29:35.627 rmmod nvme_keyring 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1470655 ']' 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1470655 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1470655 ']' 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1470655 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470655 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:35.627 10:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470655' 00:29:35.627 killing process with pid 1470655 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1470655 00:29:35.627 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1470655 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.886 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.886 10:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.859 00:29:37.859 real 0m12.504s 00:29:37.859 user 0m24.840s 00:29:37.859 sys 0m3.701s 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.859 ************************************ 00:29:37.859 END TEST nvmf_delete_subsystem 00:29:37.859 ************************************ 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:37.859 ************************************ 00:29:37.859 START TEST nvmf_host_management 00:29:37.859 ************************************ 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:37.859 * Looking for test storage... 
00:29:37.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:37.859 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.118 10:57:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.118 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.118 --rc genhtml_branch_coverage=1 00:29:38.118 --rc genhtml_function_coverage=1 00:29:38.119 --rc genhtml_legend=1 00:29:38.119 --rc geninfo_all_blocks=1 00:29:38.119 --rc geninfo_unexecuted_blocks=1 00:29:38.119 00:29:38.119 ' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.119 --rc genhtml_branch_coverage=1 00:29:38.119 --rc genhtml_function_coverage=1 00:29:38.119 --rc genhtml_legend=1 00:29:38.119 --rc geninfo_all_blocks=1 00:29:38.119 --rc geninfo_unexecuted_blocks=1 00:29:38.119 00:29:38.119 ' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.119 --rc genhtml_branch_coverage=1 00:29:38.119 --rc genhtml_function_coverage=1 00:29:38.119 --rc genhtml_legend=1 00:29:38.119 --rc geninfo_all_blocks=1 00:29:38.119 --rc geninfo_unexecuted_blocks=1 00:29:38.119 00:29:38.119 ' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.119 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.119 --rc genhtml_branch_coverage=1 00:29:38.119 --rc genhtml_function_coverage=1 00:29:38.119 --rc genhtml_legend=1 00:29:38.119 --rc geninfo_all_blocks=1 00:29:38.119 --rc geninfo_unexecuted_blocks=1 00:29:38.119 00:29:38.119 ' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.119 10:57:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.119 
10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:38.119 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.120 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.649 
10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.649 10:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:40.649 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.649 10:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:40.649 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.649 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.650 10:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:40.650 Found net devices under 0000:09:00.0: cvl_0_0 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:40.650 Found net devices under 0000:09:00.1: cvl_0_1 00:29:40.650 10:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:29:40.650 00:29:40.650 --- 10.0.0.2 ping statistics --- 00:29:40.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.650 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:40.650 00:29:40.650 --- 10.0.0.1 ping statistics --- 00:29:40.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.650 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1473540 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1473540 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1473540 ']' 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.650 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.651 10:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 [2024-11-19 10:57:27.897540] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.651 [2024-11-19 10:57:27.898680] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:29:40.651 [2024-11-19 10:57:27.898742] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.651 [2024-11-19 10:57:27.972308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.651 [2024-11-19 10:57:28.030536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.651 [2024-11-19 10:57:28.030583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.651 [2024-11-19 10:57:28.030612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.651 [2024-11-19 10:57:28.030623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.651 [2024-11-19 10:57:28.030632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:40.651 [2024-11-19 10:57:28.032015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.651 [2024-11-19 10:57:28.032120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.651 [2024-11-19 10:57:28.032213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:40.651 [2024-11-19 10:57:28.032220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.651 [2024-11-19 10:57:28.115683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:40.651 [2024-11-19 10:57:28.115912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:40.651 [2024-11-19 10:57:28.116211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:40.651 [2024-11-19 10:57:28.116874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:40.651 [2024-11-19 10:57:28.117112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 [2024-11-19 10:57:28.169027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 10:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 Malloc0 00:29:40.651 [2024-11-19 10:57:28.245139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.651 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1473629 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1473629 /var/tmp/bdevperf.sock 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1473629 ']' 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 
00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.909 { 00:29:40.909 "params": { 00:29:40.909 "name": "Nvme$subsystem", 00:29:40.909 "trtype": "$TEST_TRANSPORT", 00:29:40.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.909 "adrfam": "ipv4", 00:29:40.909 "trsvcid": "$NVMF_PORT", 00:29:40.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.909 "hdgst": ${hdgst:-false}, 00:29:40.909 "ddgst": ${ddgst:-false} 00:29:40.909 }, 00:29:40.909 "method": "bdev_nvme_attach_controller" 00:29:40.909 } 00:29:40.909 EOF 00:29:40.909 )") 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:40.909 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:40.909 "params": { 00:29:40.909 "name": "Nvme0", 00:29:40.909 "trtype": "tcp", 00:29:40.909 "traddr": "10.0.0.2", 00:29:40.909 "adrfam": "ipv4", 00:29:40.909 "trsvcid": "4420", 00:29:40.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:40.909 "hdgst": false, 00:29:40.909 "ddgst": false 00:29:40.910 }, 00:29:40.910 "method": "bdev_nvme_attach_controller" 00:29:40.910 }' 00:29:40.910 [2024-11-19 10:57:28.331945] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:29:40.910 [2024-11-19 10:57:28.332025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473629 ] 00:29:40.910 [2024-11-19 10:57:28.404864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.910 [2024-11-19 10:57:28.465033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.168 Running I/O for 10 seconds... 
00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:41.168 10:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:41.168 10:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:29:41.426 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=546 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 546 -ge 100 ']' 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.688 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.688 [2024-11-19 10:57:29.069056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7c720 is same with the state(6) to be set
[preceding tcp.c:1773 *ERROR* message repeated for tqpair=0x1c7c720, identical except timestamp, from 10:57:29.069110 through 10:57:29.069921; duplicate log lines omitted]
00:29:41.689 [2024-11-19 10:57:29.070017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070058] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:41.689 [2024-11-19 10:57:29.070435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.689 [2024-11-19 10:57:29.070637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.689 [2024-11-19 10:57:29.070656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.070972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.070988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 
10:57:29.071121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.690 [2024-11-19 10:57:29.071637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.690 [2024-11-19 10:57:29.071652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 
[2024-11-19 10:57:29.071667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.071970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.071985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.072000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.072015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.691 [2024-11-19 10:57:29.072029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.072044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aaa60 is same with the state(6) to be set 00:29:41.691 [2024-11-19 10:57:29.072174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.691 [2024-11-19 10:57:29.072197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.072213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.691 [2024-11-19 10:57:29.072227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.072241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.691 [2024-11-19 10:57:29.072254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.072268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.691 [2024-11-19 10:57:29.072281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 
10:57:29.072301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491a40 is same with the state(6) to be set 00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.691 [2024-11-19 10:57:29.073484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.691 task offset: 73728 on job bdev=Nvme0n1 fails 00:29:41.691 00:29:41.691 Latency(us) 00:29:41.691 [2024-11-19T09:57:29.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.691 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.691 Job: Nvme0n1 ended in about 0.39 seconds with error 00:29:41.691 Verification LBA range: start 0x0 length 0x400 00:29:41.691 Nvme0n1 : 0.39 1485.45 92.84 165.05 0.00 37642.16 6553.60 34369.99 00:29:41.691 [2024-11-19T09:57:29.314Z] =================================================================================================================== 00:29:41.691 [2024-11-19T09:57:29.314Z] Total : 1485.45 92.84 165.05 0.00 37642.16 6553.60 34369.99 00:29:41.691 [2024-11-19 10:57:29.075576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:41.691 [2024-11-19 10:57:29.075613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491a40 (9): Bad file descriptor 00:29:41.691 [2024-11-19 10:57:29.076784] ctrlr.c: 
823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:41.691 [2024-11-19 10:57:29.077007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:41.691 [2024-11-19 10:57:29.077036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.691 [2024-11-19 10:57:29.077060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:41.691 [2024-11-19 10:57:29.077075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:41.691 [2024-11-19 10:57:29.077089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.691 [2024-11-19 10:57:29.077101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1491a40 00:29:41.691 [2024-11-19 10:57:29.077136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491a40 (9): Bad file descriptor 00:29:41.691 [2024-11-19 10:57:29.077161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.691 [2024-11-19 10:57:29.077177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.691 [2024-11-19 10:57:29.077192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.691 [2024-11-19 10:57:29.077207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.691 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1473629 00:29:42.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1473629) - No such process 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.627 { 00:29:42.627 "params": { 00:29:42.627 "name": "Nvme$subsystem", 00:29:42.627 "trtype": "$TEST_TRANSPORT", 00:29:42.627 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:42.627 "adrfam": "ipv4", 00:29:42.627 "trsvcid": "$NVMF_PORT", 00:29:42.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.627 "hdgst": ${hdgst:-false}, 00:29:42.627 "ddgst": ${ddgst:-false} 00:29:42.627 }, 00:29:42.627 "method": "bdev_nvme_attach_controller" 00:29:42.627 } 00:29:42.627 EOF 00:29:42.627 )") 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:42.627 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.627 "params": { 00:29:42.627 "name": "Nvme0", 00:29:42.627 "trtype": "tcp", 00:29:42.627 "traddr": "10.0.0.2", 00:29:42.627 "adrfam": "ipv4", 00:29:42.627 "trsvcid": "4420", 00:29:42.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.627 "hdgst": false, 00:29:42.627 "ddgst": false 00:29:42.627 }, 00:29:42.627 "method": "bdev_nvme_attach_controller" 00:29:42.627 }' 00:29:42.627 [2024-11-19 10:57:30.135979] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:29:42.627 [2024-11-19 10:57:30.136070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473858 ] 00:29:42.627 [2024-11-19 10:57:30.208033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.886 [2024-11-19 10:57:30.270335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.886 Running I/O for 1 seconds... 
00:29:44.259 1664.00 IOPS, 104.00 MiB/s 00:29:44.259 Latency(us) 00:29:44.259 [2024-11-19T09:57:31.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.259 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.259 Verification LBA range: start 0x0 length 0x400 00:29:44.259 Nvme0n1 : 1.02 1693.81 105.86 0.00 0.00 37171.66 6165.24 33399.09 00:29:44.259 [2024-11-19T09:57:31.882Z] =================================================================================================================== 00:29:44.259 [2024-11-19T09:57:31.882Z] Total : 1693.81 105.86 0.00 0.00 37171.66 6165.24 33399.09 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:44.259 
10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.259 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.259 rmmod nvme_tcp 00:29:44.259 rmmod nvme_fabrics 00:29:44.259 rmmod nvme_keyring 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1473540 ']' 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1473540 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1473540 ']' 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1473540 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1473540 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.260 10:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1473540' 00:29:44.260 killing process with pid 1473540 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1473540 00:29:44.260 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1473540 00:29:44.518 [2024-11-19 10:57:32.057785] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.518 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.519 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.519 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.519 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.519 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:47.057 00:29:47.057 real 0m8.732s 00:29:47.057 user 0m16.932s 00:29:47.057 sys 0m3.768s 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:47.057 ************************************ 00:29:47.057 END TEST nvmf_host_management 00:29:47.057 ************************************ 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.057 ************************************ 00:29:47.057 START TEST nvmf_lvol 00:29:47.057 ************************************ 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:47.057 * Looking for test storage... 
00:29:47.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.057 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:47.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.058 --rc genhtml_branch_coverage=1 00:29:47.058 --rc genhtml_function_coverage=1 00:29:47.058 --rc genhtml_legend=1 00:29:47.058 --rc geninfo_all_blocks=1 00:29:47.058 --rc geninfo_unexecuted_blocks=1 00:29:47.058 00:29:47.058 ' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:47.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.058 --rc genhtml_branch_coverage=1 00:29:47.058 --rc genhtml_function_coverage=1 00:29:47.058 --rc genhtml_legend=1 00:29:47.058 --rc geninfo_all_blocks=1 00:29:47.058 --rc geninfo_unexecuted_blocks=1 00:29:47.058 00:29:47.058 ' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:47.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.058 --rc genhtml_branch_coverage=1 00:29:47.058 --rc genhtml_function_coverage=1 00:29:47.058 --rc genhtml_legend=1 00:29:47.058 --rc geninfo_all_blocks=1 00:29:47.058 --rc geninfo_unexecuted_blocks=1 00:29:47.058 00:29:47.058 ' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:47.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.058 --rc genhtml_branch_coverage=1 00:29:47.058 --rc genhtml_function_coverage=1 00:29:47.058 --rc genhtml_legend=1 00:29:47.058 --rc geninfo_all_blocks=1 00:29:47.058 --rc geninfo_unexecuted_blocks=1 00:29:47.058 00:29:47.058 ' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:47.058 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.059 
10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.059 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.961 10:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.961 10:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:48.961 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:48.961 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.961 10:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.961 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:48.962 Found net devices under 0000:09:00.0: cvl_0_0 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.962 10:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:48.962 Found net devices under 0000:09:00.1: cvl_0_1 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.962 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:49.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:29:49.221 00:29:49.221 --- 10.0.0.2 ping statistics --- 00:29:49.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.221 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:49.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:49.221 00:29:49.221 --- 10.0.0.1 ping statistics --- 00:29:49.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.221 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1476062 
00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1476062 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1476062 ']' 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.221 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:49.221 [2024-11-19 10:57:36.757171] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:49.221 [2024-11-19 10:57:36.758202] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:29:49.221 [2024-11-19 10:57:36.758264] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.221 [2024-11-19 10:57:36.827927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:49.479 [2024-11-19 10:57:36.884256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.479 [2024-11-19 10:57:36.884329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.479 [2024-11-19 10:57:36.884344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.479 [2024-11-19 10:57:36.884354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.479 [2024-11-19 10:57:36.884364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.479 [2024-11-19 10:57:36.885834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.479 [2024-11-19 10:57:36.885903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.479 [2024-11-19 10:57:36.885906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.479 [2024-11-19 10:57:36.970000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:49.479 [2024-11-19 10:57:36.970213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:49.479 [2024-11-19 10:57:36.970217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:49.479 [2024-11-19 10:57:36.970487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:49.479 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.479 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:49.479 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:49.479 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.479 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:49.479 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.479 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:49.737 [2024-11-19 10:57:37.270668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.737 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:49.995 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:49.995 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:50.561 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:50.561 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:50.561 10:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:51.127 10:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c601f6f0-8cc2-4119-8b3e-41b938b79faa 00:29:51.127 10:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c601f6f0-8cc2-4119-8b3e-41b938b79faa lvol 20 00:29:51.127 10:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=42e48210-e95d-40dc-9312-b272c3ae200c 00:29:51.127 10:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:51.694 10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42e48210-e95d-40dc-9312-b272c3ae200c 00:29:51.694 10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.952 [2024-11-19 10:57:39.538789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.952 10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.518 
10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1476484 00:29:52.518 10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:52.518 10:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:53.452 10:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 42e48210-e95d-40dc-9312-b272c3ae200c MY_SNAPSHOT 00:29:53.710 10:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e29e1de-bdb3-4b7a-bc5a-df9ad4b2fef2 00:29:53.710 10:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 42e48210-e95d-40dc-9312-b272c3ae200c 30 00:29:53.968 10:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e29e1de-bdb3-4b7a-bc5a-df9ad4b2fef2 MY_CLONE 00:29:54.225 10:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1b64cf7c-a448-4a0f-a38b-f0b86efa3b5c 00:29:54.225 10:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1b64cf7c-a448-4a0f-a38b-f0b86efa3b5c 00:29:54.791 10:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1476484 00:30:02.901 Initializing NVMe Controllers 00:30:02.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:02.901 
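Lines 41–53 of the script, traced above, run the interesting part of the test: while `spdk_nvme_perf` drives 4 KiB random writes at queue depth 128 against the namespace, the lvol is snapshotted, resized, cloned, and the clone inflated, all under live I/O. A dry-run sketch of those RPCs, again with a hypothetical `rpc` echo stand-in and UUIDs taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the under-load snapshot/resize/clone/inflate sequence traced above.
rpc() { echo "rpc.py $*"; }                       # hypothetical stand-in for scripts/rpc.py

lvol=42e48210-e95d-40dc-9312-b272c3ae200c         # live lvol UUID from the log
rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT        # read-only point-in-time snapshot under load
snap=1e29e1de-bdb3-4b7a-bc5a-df9ad4b2fef2         # snapshot UUID returned in the real run
rpc bdev_lvol_resize "$lvol" 30                   # grow the live volume from 20 to 30 MiB
rpc bdev_lvol_clone "$snap" MY_CLONE              # thin clone backed by the snapshot
clone=1b64cf7c-a448-4a0f-a38b-f0b86efa3b5c        # clone UUID returned in the real run
rpc bdev_lvol_inflate "$clone"                    # copy clusters so the clone no longer depends on the snapshot
```

In the real run the script then `wait`s on the perf PID (1476484) and tears everything down, which is what the delete RPCs in the following trace lines do.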
Controller IO queue size 128, less than required. 00:30:02.901 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:02.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:02.902 Initialization complete. Launching workers. 00:30:02.902 ======================================================== 00:30:02.902 Latency(us) 00:30:02.902 Device Information : IOPS MiB/s Average min max 00:30:02.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10580.20 41.33 12102.85 5674.32 72599.54 00:30:02.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10425.70 40.73 12282.97 4753.98 71981.34 00:30:02.902 ======================================================== 00:30:02.902 Total : 21005.90 82.05 12192.25 4753.98 72599.54 00:30:02.902 00:30:02.902 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.160 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 42e48210-e95d-40dc-9312-b272c3ae200c 00:30:03.418 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c601f6f0-8cc2-4119-8b3e-41b938b79faa 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.676 rmmod nvme_tcp 00:30:03.676 rmmod nvme_fabrics 00:30:03.676 rmmod nvme_keyring 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1476062 ']' 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1476062 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1476062 ']' 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1476062 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.676 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1476062 00:30:03.934 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.934 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.934 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476062' 00:30:03.934 killing process with pid 1476062 00:30:03.934 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1476062 00:30:03.934 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1476062 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.193 10:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.193 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.098 00:30:06.098 real 0m19.434s 00:30:06.098 user 0m57.083s 00:30:06.098 sys 0m7.738s 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:06.098 ************************************ 00:30:06.098 END TEST nvmf_lvol 00:30:06.098 ************************************ 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.098 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.098 ************************************ 00:30:06.098 START TEST nvmf_lvs_grow 00:30:06.098 ************************************ 00:30:06.099 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:06.356 * Looking for test storage... 
00:30:06.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.356 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.356 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.357 10:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.357 10:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.357 --rc genhtml_branch_coverage=1 00:30:06.357 --rc genhtml_function_coverage=1 00:30:06.357 --rc genhtml_legend=1 00:30:06.357 --rc geninfo_all_blocks=1 00:30:06.357 --rc geninfo_unexecuted_blocks=1 00:30:06.357 00:30:06.357 ' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.357 --rc genhtml_branch_coverage=1 00:30:06.357 --rc genhtml_function_coverage=1 00:30:06.357 --rc genhtml_legend=1 00:30:06.357 --rc geninfo_all_blocks=1 00:30:06.357 --rc geninfo_unexecuted_blocks=1 00:30:06.357 00:30:06.357 ' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.357 --rc genhtml_branch_coverage=1 00:30:06.357 --rc genhtml_function_coverage=1 00:30:06.357 --rc genhtml_legend=1 00:30:06.357 --rc geninfo_all_blocks=1 00:30:06.357 --rc geninfo_unexecuted_blocks=1 00:30:06.357 00:30:06.357 ' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.357 --rc genhtml_branch_coverage=1 00:30:06.357 --rc genhtml_function_coverage=1 00:30:06.357 --rc genhtml_legend=1 00:30:06.357 --rc geninfo_all_blocks=1 00:30:06.357 --rc 
geninfo_unexecuted_blocks=1 00:30:06.357 00:30:06.357 ' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:06.357 10:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.357 10:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:06.357 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.358 10:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.358 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.260 
10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.260 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.261 10:57:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.261 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.261 10:57:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:08.520 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:08.520 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:08.520 Found net devices under 0000:09:00.0: cvl_0_0 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.520 10:57:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:08.520 Found net devices under 0000:09:00.1: cvl_0_1 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.520 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.521 
10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.521 10:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:30:08.521 00:30:08.521 --- 10.0.0.2 ping statistics --- 00:30:08.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.521 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:08.521 00:30:08.521 --- 10.0.0.1 ping statistics --- 00:30:08.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.521 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.521 10:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1479740 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1479740 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1479740 ']' 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.521 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.521 [2024-11-19 10:57:56.089575] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.521 [2024-11-19 10:57:56.090647] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:30:08.521 [2024-11-19 10:57:56.090704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.779 [2024-11-19 10:57:56.162248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.779 [2024-11-19 10:57:56.218126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.779 [2024-11-19 10:57:56.218178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.779 [2024-11-19 10:57:56.218207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.779 [2024-11-19 10:57:56.218219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.780 [2024-11-19 10:57:56.218228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.780 [2024-11-19 10:57:56.218856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.780 [2024-11-19 10:57:56.304383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:08.780 [2024-11-19 10:57:56.304690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.780 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.039 [2024-11-19 10:57:56.611494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.039 ************************************ 00:30:09.039 START TEST lvs_grow_clean 00:30:09.039 ************************************ 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:09.039 10:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.039 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.297 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:09.556 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:09.556 10:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:09.815 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:09.815 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:09.815 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:10.073 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:10.073 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:10.073 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f56bfe58-2d4a-4166-8f85-edba743b1a8a lvol 150 00:30:10.331 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 00:30:10.331 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:10.331 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:10.590 [2024-11-19 10:57:58.027419] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:10.590 [2024-11-19 10:57:58.027522] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:10.590 true 00:30:10.590 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:10.590 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:10.847 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:10.847 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:11.105 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 00:30:11.364 10:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.622 [2024-11-19 10:57:59.127703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.622 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1480175 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1480175 /var/tmp/bdevperf.sock 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1480175 ']' 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.903 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:11.903 [2024-11-19 10:57:59.500642] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:11.903 [2024-11-19 10:57:59.500726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480175 ] 00:30:12.160 [2024-11-19 10:57:59.568266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.160 [2024-11-19 10:57:59.628210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.160 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.160 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:12.160 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:12.724 Nvme0n1 00:30:12.724 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:12.980 [ 00:30:12.980 { 00:30:12.980 "name": "Nvme0n1", 00:30:12.980 "aliases": [ 00:30:12.980 "b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365" 00:30:12.980 ], 00:30:12.980 "product_name": "NVMe disk", 00:30:12.980 
"block_size": 4096, 00:30:12.980 "num_blocks": 38912, 00:30:12.980 "uuid": "b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365", 00:30:12.980 "numa_id": 0, 00:30:12.980 "assigned_rate_limits": { 00:30:12.980 "rw_ios_per_sec": 0, 00:30:12.980 "rw_mbytes_per_sec": 0, 00:30:12.980 "r_mbytes_per_sec": 0, 00:30:12.980 "w_mbytes_per_sec": 0 00:30:12.980 }, 00:30:12.980 "claimed": false, 00:30:12.980 "zoned": false, 00:30:12.980 "supported_io_types": { 00:30:12.980 "read": true, 00:30:12.980 "write": true, 00:30:12.981 "unmap": true, 00:30:12.981 "flush": true, 00:30:12.981 "reset": true, 00:30:12.981 "nvme_admin": true, 00:30:12.981 "nvme_io": true, 00:30:12.981 "nvme_io_md": false, 00:30:12.981 "write_zeroes": true, 00:30:12.981 "zcopy": false, 00:30:12.981 "get_zone_info": false, 00:30:12.981 "zone_management": false, 00:30:12.981 "zone_append": false, 00:30:12.981 "compare": true, 00:30:12.981 "compare_and_write": true, 00:30:12.981 "abort": true, 00:30:12.981 "seek_hole": false, 00:30:12.981 "seek_data": false, 00:30:12.981 "copy": true, 00:30:12.981 "nvme_iov_md": false 00:30:12.981 }, 00:30:12.981 "memory_domains": [ 00:30:12.981 { 00:30:12.981 "dma_device_id": "system", 00:30:12.981 "dma_device_type": 1 00:30:12.981 } 00:30:12.981 ], 00:30:12.981 "driver_specific": { 00:30:12.981 "nvme": [ 00:30:12.981 { 00:30:12.981 "trid": { 00:30:12.981 "trtype": "TCP", 00:30:12.981 "adrfam": "IPv4", 00:30:12.981 "traddr": "10.0.0.2", 00:30:12.981 "trsvcid": "4420", 00:30:12.981 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:12.981 }, 00:30:12.981 "ctrlr_data": { 00:30:12.981 "cntlid": 1, 00:30:12.981 "vendor_id": "0x8086", 00:30:12.981 "model_number": "SPDK bdev Controller", 00:30:12.981 "serial_number": "SPDK0", 00:30:12.981 "firmware_revision": "25.01", 00:30:12.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.981 "oacs": { 00:30:12.981 "security": 0, 00:30:12.981 "format": 0, 00:30:12.981 "firmware": 0, 00:30:12.981 "ns_manage": 0 00:30:12.981 }, 00:30:12.981 "multi_ctrlr": true, 
00:30:12.981 "ana_reporting": false 00:30:12.981 }, 00:30:12.981 "vs": { 00:30:12.981 "nvme_version": "1.3" 00:30:12.981 }, 00:30:12.981 "ns_data": { 00:30:12.981 "id": 1, 00:30:12.981 "can_share": true 00:30:12.981 } 00:30:12.981 } 00:30:12.981 ], 00:30:12.981 "mp_policy": "active_passive" 00:30:12.981 } 00:30:12.981 } 00:30:12.981 ] 00:30:12.981 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1480309 00:30:12.981 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.981 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:12.981 Running I/O for 10 seconds... 00:30:13.912 Latency(us) 00:30:13.912 [2024-11-19T09:58:01.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.912 Nvme0n1 : 1.00 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:30:13.912 [2024-11-19T09:58:01.535Z] =================================================================================================================== 00:30:13.912 [2024-11-19T09:58:01.535Z] Total : 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:30:13.912 00:30:14.846 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:15.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.104 Nvme0n1 : 2.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:30:15.104 [2024-11-19T09:58:02.727Z] 
=================================================================================================================== 00:30:15.104 [2024-11-19T09:58:02.727Z] Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:30:15.104 00:30:15.104 true 00:30:15.104 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:15.104 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:15.361 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:15.361 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:15.361 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1480309 00:30:15.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.989 Nvme0n1 : 3.00 15124.33 59.08 0.00 0.00 0.00 0.00 0.00 00:30:15.989 [2024-11-19T09:58:03.612Z] =================================================================================================================== 00:30:15.989 [2024-11-19T09:58:03.612Z] Total : 15124.33 59.08 0.00 0.00 0.00 0.00 0.00 00:30:15.989 00:30:16.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.923 Nvme0n1 : 4.00 15216.75 59.44 0.00 0.00 0.00 0.00 0.00 00:30:16.923 [2024-11-19T09:58:04.546Z] =================================================================================================================== 00:30:16.923 [2024-11-19T09:58:04.546Z] Total : 15216.75 59.44 0.00 0.00 0.00 0.00 0.00 00:30:16.923 00:30:18.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:18.296 Nvme0n1 : 5.00 15272.20 59.66 0.00 0.00 0.00 0.00 0.00 00:30:18.296 [2024-11-19T09:58:05.919Z] =================================================================================================================== 00:30:18.296 [2024-11-19T09:58:05.919Z] Total : 15272.20 59.66 0.00 0.00 0.00 0.00 0.00 00:30:18.296 00:30:19.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.230 Nvme0n1 : 6.00 15309.17 59.80 0.00 0.00 0.00 0.00 0.00 00:30:19.230 [2024-11-19T09:58:06.853Z] =================================================================================================================== 00:30:19.230 [2024-11-19T09:58:06.853Z] Total : 15309.17 59.80 0.00 0.00 0.00 0.00 0.00 00:30:19.230 00:30:20.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.163 Nvme0n1 : 7.00 15371.86 60.05 0.00 0.00 0.00 0.00 0.00 00:30:20.163 [2024-11-19T09:58:07.786Z] =================================================================================================================== 00:30:20.163 [2024-11-19T09:58:07.786Z] Total : 15371.86 60.05 0.00 0.00 0.00 0.00 0.00 00:30:20.163 00:30:21.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.097 Nvme0n1 : 8.00 15411.00 60.20 0.00 0.00 0.00 0.00 0.00 00:30:21.097 [2024-11-19T09:58:08.720Z] =================================================================================================================== 00:30:21.097 [2024-11-19T09:58:08.720Z] Total : 15411.00 60.20 0.00 0.00 0.00 0.00 0.00 00:30:21.097 00:30:22.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.030 Nvme0n1 : 9.00 15445.11 60.33 0.00 0.00 0.00 0.00 0.00 00:30:22.030 [2024-11-19T09:58:09.653Z] =================================================================================================================== 00:30:22.030 [2024-11-19T09:58:09.653Z] Total : 15445.11 60.33 0.00 0.00 0.00 0.00 0.00 00:30:22.030 
00:30:22.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.966 Nvme0n1 : 10.00 15453.40 60.36 0.00 0.00 0.00 0.00 0.00 00:30:22.966 [2024-11-19T09:58:10.589Z] =================================================================================================================== 00:30:22.966 [2024-11-19T09:58:10.589Z] Total : 15453.40 60.36 0.00 0.00 0.00 0.00 0.00 00:30:22.966 00:30:22.966 00:30:22.966 Latency(us) 00:30:22.966 [2024-11-19T09:58:10.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.966 Nvme0n1 : 10.01 15455.05 60.37 0.00 0.00 8277.46 3859.34 18544.26 00:30:22.966 [2024-11-19T09:58:10.589Z] =================================================================================================================== 00:30:22.966 [2024-11-19T09:58:10.589Z] Total : 15455.05 60.37 0.00 0.00 8277.46 3859.34 18544.26 00:30:22.966 { 00:30:22.966 "results": [ 00:30:22.966 { 00:30:22.966 "job": "Nvme0n1", 00:30:22.966 "core_mask": "0x2", 00:30:22.966 "workload": "randwrite", 00:30:22.966 "status": "finished", 00:30:22.966 "queue_depth": 128, 00:30:22.966 "io_size": 4096, 00:30:22.966 "runtime": 10.007214, 00:30:22.966 "iops": 15455.05072640597, 00:30:22.966 "mibps": 60.37129190002332, 00:30:22.966 "io_failed": 0, 00:30:22.966 "io_timeout": 0, 00:30:22.966 "avg_latency_us": 8277.46232297239, 00:30:22.966 "min_latency_us": 3859.342222222222, 00:30:22.966 "max_latency_us": 18544.26074074074 00:30:22.966 } 00:30:22.966 ], 00:30:22.966 "core_count": 1 00:30:22.966 } 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1480175 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1480175 ']' 00:30:22.966 10:58:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1480175 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480175 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480175' 00:30:22.966 killing process with pid 1480175 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1480175 00:30:22.966 Received shutdown signal, test time was about 10.000000 seconds 00:30:22.966 00:30:22.966 Latency(us) 00:30:22.966 [2024-11-19T09:58:10.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.966 [2024-11-19T09:58:10.589Z] =================================================================================================================== 00:30:22.966 [2024-11-19T09:58:10.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.966 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1480175 00:30:23.224 10:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:23.482 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.050 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:24.050 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:24.308 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:24.308 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:24.308 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:24.567 [2024-11-19 10:58:11.935433] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:24.567 10:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:24.825 request: 00:30:24.825 { 00:30:24.825 "uuid": "f56bfe58-2d4a-4166-8f85-edba743b1a8a", 00:30:24.825 "method": 
"bdev_lvol_get_lvstores", 00:30:24.825 "req_id": 1 00:30:24.825 } 00:30:24.825 Got JSON-RPC error response 00:30:24.825 response: 00:30:24.825 { 00:30:24.825 "code": -19, 00:30:24.825 "message": "No such device" 00:30:24.825 } 00:30:24.825 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:24.826 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:24.826 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:24.826 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:24.826 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:25.084 aio_bdev 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:25.084 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:25.342 10:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 -t 2000 00:30:25.600 [ 00:30:25.600 { 00:30:25.600 "name": "b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365", 00:30:25.600 "aliases": [ 00:30:25.600 "lvs/lvol" 00:30:25.600 ], 00:30:25.600 "product_name": "Logical Volume", 00:30:25.600 "block_size": 4096, 00:30:25.600 "num_blocks": 38912, 00:30:25.600 "uuid": "b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365", 00:30:25.600 "assigned_rate_limits": { 00:30:25.600 "rw_ios_per_sec": 0, 00:30:25.600 "rw_mbytes_per_sec": 0, 00:30:25.600 "r_mbytes_per_sec": 0, 00:30:25.600 "w_mbytes_per_sec": 0 00:30:25.600 }, 00:30:25.600 "claimed": false, 00:30:25.600 "zoned": false, 00:30:25.600 "supported_io_types": { 00:30:25.600 "read": true, 00:30:25.600 "write": true, 00:30:25.600 "unmap": true, 00:30:25.600 "flush": false, 00:30:25.600 "reset": true, 00:30:25.600 "nvme_admin": false, 00:30:25.600 "nvme_io": false, 00:30:25.600 "nvme_io_md": false, 00:30:25.600 "write_zeroes": true, 00:30:25.600 "zcopy": false, 00:30:25.600 "get_zone_info": false, 00:30:25.600 "zone_management": false, 00:30:25.600 "zone_append": false, 00:30:25.600 "compare": false, 00:30:25.600 "compare_and_write": false, 00:30:25.600 "abort": false, 00:30:25.600 "seek_hole": true, 00:30:25.600 "seek_data": true, 00:30:25.600 "copy": false, 00:30:25.600 "nvme_iov_md": false 00:30:25.600 }, 00:30:25.600 "driver_specific": { 00:30:25.600 "lvol": { 00:30:25.600 "lvol_store_uuid": "f56bfe58-2d4a-4166-8f85-edba743b1a8a", 00:30:25.600 "base_bdev": "aio_bdev", 00:30:25.600 
"thin_provision": false, 00:30:25.600 "num_allocated_clusters": 38, 00:30:25.600 "snapshot": false, 00:30:25.600 "clone": false, 00:30:25.600 "esnap_clone": false 00:30:25.600 } 00:30:25.600 } 00:30:25.600 } 00:30:25.600 ] 00:30:25.600 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:25.600 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:25.600 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:25.857 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:25.857 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 00:30:25.857 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:26.115 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:26.115 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b9d31f6e-d8d5-4f0b-a25c-cdbfedb0a365 00:30:26.373 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f56bfe58-2d4a-4166-8f85-edba743b1a8a 
00:30:26.632 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:26.890 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:27.148 00:30:27.148 real 0m17.865s 00:30:27.148 user 0m17.337s 00:30:27.148 sys 0m1.877s 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:27.148 ************************************ 00:30:27.148 END TEST lvs_grow_clean 00:30:27.148 ************************************ 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:27.148 ************************************ 00:30:27.148 START TEST lvs_grow_dirty 00:30:27.148 ************************************ 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:27.148 10:58:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:27.148 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:27.407 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:27.407 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:27.665 10:58:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:27.665 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:27.665 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:27.923 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:27.923 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:27.923 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e lvol 150 00:30:28.181 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=169d74c8-06bb-4946-9598-e4a5fd366449 00:30:28.181 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:28.181 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:28.439 [2024-11-19 10:58:15.999395] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:28.439 [2024-11-19 
10:58:15.999508] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:28.439 true 00:30:28.439 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:28.439 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:28.698 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:28.698 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:28.956 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 169d74c8-06bb-4946-9598-e4a5fd366449 00:30:29.524 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.782 [2024-11-19 10:58:17.164059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.782 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:30.041 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1482338 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1482338 /var/tmp/bdevperf.sock 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1482338 ']' 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:30.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.042 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:30.042 [2024-11-19 10:58:17.502198] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:30:30.042 [2024-11-19 10:58:17.502285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482338 ] 00:30:30.042 [2024-11-19 10:58:17.569873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.042 [2024-11-19 10:58:17.631399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.300 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.300 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:30.300 10:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:30.558 Nvme0n1 00:30:30.558 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:30.817 [ 00:30:30.817 { 00:30:30.817 "name": "Nvme0n1", 00:30:30.817 "aliases": [ 00:30:30.817 "169d74c8-06bb-4946-9598-e4a5fd366449" 00:30:30.817 ], 00:30:30.817 "product_name": "NVMe disk", 00:30:30.817 "block_size": 4096, 00:30:30.817 "num_blocks": 38912, 00:30:30.817 "uuid": "169d74c8-06bb-4946-9598-e4a5fd366449", 00:30:30.817 "numa_id": 0, 00:30:30.817 "assigned_rate_limits": { 00:30:30.817 "rw_ios_per_sec": 0, 00:30:30.817 "rw_mbytes_per_sec": 0, 00:30:30.817 "r_mbytes_per_sec": 0, 00:30:30.817 "w_mbytes_per_sec": 0 00:30:30.817 }, 00:30:30.817 "claimed": false, 00:30:30.817 "zoned": false, 
00:30:30.817 "supported_io_types": { 00:30:30.817 "read": true, 00:30:30.817 "write": true, 00:30:30.817 "unmap": true, 00:30:30.817 "flush": true, 00:30:30.817 "reset": true, 00:30:30.817 "nvme_admin": true, 00:30:30.817 "nvme_io": true, 00:30:30.817 "nvme_io_md": false, 00:30:30.817 "write_zeroes": true, 00:30:30.817 "zcopy": false, 00:30:30.817 "get_zone_info": false, 00:30:30.817 "zone_management": false, 00:30:30.817 "zone_append": false, 00:30:30.817 "compare": true, 00:30:30.817 "compare_and_write": true, 00:30:30.817 "abort": true, 00:30:30.817 "seek_hole": false, 00:30:30.817 "seek_data": false, 00:30:30.817 "copy": true, 00:30:30.817 "nvme_iov_md": false 00:30:30.817 }, 00:30:30.817 "memory_domains": [ 00:30:30.817 { 00:30:30.817 "dma_device_id": "system", 00:30:30.817 "dma_device_type": 1 00:30:30.817 } 00:30:30.817 ], 00:30:30.817 "driver_specific": { 00:30:30.817 "nvme": [ 00:30:30.817 { 00:30:30.817 "trid": { 00:30:30.817 "trtype": "TCP", 00:30:30.817 "adrfam": "IPv4", 00:30:30.817 "traddr": "10.0.0.2", 00:30:30.817 "trsvcid": "4420", 00:30:30.817 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:30.817 }, 00:30:30.817 "ctrlr_data": { 00:30:30.817 "cntlid": 1, 00:30:30.817 "vendor_id": "0x8086", 00:30:30.817 "model_number": "SPDK bdev Controller", 00:30:30.817 "serial_number": "SPDK0", 00:30:30.817 "firmware_revision": "25.01", 00:30:30.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.817 "oacs": { 00:30:30.817 "security": 0, 00:30:30.817 "format": 0, 00:30:30.817 "firmware": 0, 00:30:30.817 "ns_manage": 0 00:30:30.817 }, 00:30:30.817 "multi_ctrlr": true, 00:30:30.817 "ana_reporting": false 00:30:30.817 }, 00:30:30.817 "vs": { 00:30:30.817 "nvme_version": "1.3" 00:30:30.817 }, 00:30:30.817 "ns_data": { 00:30:30.817 "id": 1, 00:30:30.817 "can_share": true 00:30:30.817 } 00:30:30.817 } 00:30:30.817 ], 00:30:30.817 "mp_policy": "active_passive" 00:30:30.817 } 00:30:30.817 } 00:30:30.817 ] 00:30:30.817 10:58:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1482364 00:30:30.817 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:30.817 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:31.076 Running I/O for 10 seconds... 00:30:32.012 Latency(us) 00:30:32.012 [2024-11-19T09:58:19.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.012 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:30:32.012 [2024-11-19T09:58:19.635Z] =================================================================================================================== 00:30:32.012 [2024-11-19T09:58:19.635Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:30:32.012 00:30:32.945 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:32.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.945 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:30:32.945 [2024-11-19T09:58:20.568Z] =================================================================================================================== 00:30:32.945 [2024-11-19T09:58:20.568Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:30:32.945 00:30:33.202 true 00:30:33.202 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:33.202 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:33.460 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:33.460 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:33.460 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1482364 00:30:34.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.025 Nvme0n1 : 3.00 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:30:34.025 [2024-11-19T09:58:21.648Z] =================================================================================================================== 00:30:34.025 [2024-11-19T09:58:21.648Z] Total : 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:30:34.025 00:30:34.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.959 Nvme0n1 : 4.00 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:30:34.959 [2024-11-19T09:58:22.582Z] =================================================================================================================== 00:30:34.959 [2024-11-19T09:58:22.582Z] Total : 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:30:34.959 00:30:36.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.332 Nvme0n1 : 5.00 15278.20 59.68 0.00 0.00 0.00 0.00 0.00 00:30:36.332 [2024-11-19T09:58:23.955Z] =================================================================================================================== 00:30:36.333 [2024-11-19T09:58:23.956Z] Total : 15278.20 59.68 0.00 0.00 0.00 0.00 0.00 00:30:36.333 00:30:37.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:37.267 Nvme0n1 : 6.00 15345.83 59.94 0.00 0.00 0.00 0.00 0.00 00:30:37.267 [2024-11-19T09:58:24.890Z] =================================================================================================================== 00:30:37.267 [2024-11-19T09:58:24.890Z] Total : 15345.83 59.94 0.00 0.00 0.00 0.00 0.00 00:30:37.267 00:30:38.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.199 Nvme0n1 : 7.00 15385.14 60.10 0.00 0.00 0.00 0.00 0.00 00:30:38.199 [2024-11-19T09:58:25.822Z] =================================================================================================================== 00:30:38.199 [2024-11-19T09:58:25.823Z] Total : 15385.14 60.10 0.00 0.00 0.00 0.00 0.00 00:30:38.200 00:30:39.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.206 Nvme0n1 : 8.00 15414.62 60.21 0.00 0.00 0.00 0.00 0.00 00:30:39.206 [2024-11-19T09:58:26.829Z] =================================================================================================================== 00:30:39.206 [2024-11-19T09:58:26.829Z] Total : 15414.62 60.21 0.00 0.00 0.00 0.00 0.00 00:30:39.206 00:30:40.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:40.141 Nvme0n1 : 9.00 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 00:30:40.141 [2024-11-19T09:58:27.764Z] =================================================================================================================== 00:30:40.141 [2024-11-19T09:58:27.764Z] Total : 15451.67 60.36 0.00 0.00 0.00 0.00 0.00 00:30:40.141 00:30:41.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:41.075 Nvme0n1 : 10.00 15481.30 60.47 0.00 0.00 0.00 0.00 0.00 00:30:41.075 [2024-11-19T09:58:28.698Z] =================================================================================================================== 00:30:41.075 [2024-11-19T09:58:28.698Z] Total : 15481.30 60.47 0.00 0.00 0.00 0.00 0.00 00:30:41.075 00:30:41.075 
00:30:41.075 Latency(us) 00:30:41.075 [2024-11-19T09:58:28.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:41.075 Nvme0n1 : 10.00 15487.68 60.50 0.00 0.00 8260.25 6262.33 18155.90 00:30:41.075 [2024-11-19T09:58:28.698Z] =================================================================================================================== 00:30:41.075 [2024-11-19T09:58:28.698Z] Total : 15487.68 60.50 0.00 0.00 8260.25 6262.33 18155.90 00:30:41.075 { 00:30:41.075 "results": [ 00:30:41.075 { 00:30:41.075 "job": "Nvme0n1", 00:30:41.075 "core_mask": "0x2", 00:30:41.075 "workload": "randwrite", 00:30:41.075 "status": "finished", 00:30:41.075 "queue_depth": 128, 00:30:41.075 "io_size": 4096, 00:30:41.075 "runtime": 10.004145, 00:30:41.075 "iops": 15487.680356492234, 00:30:41.075 "mibps": 60.49875139254779, 00:30:41.075 "io_failed": 0, 00:30:41.075 "io_timeout": 0, 00:30:41.075 "avg_latency_us": 8260.247544721324, 00:30:41.075 "min_latency_us": 6262.328888888889, 00:30:41.075 "max_latency_us": 18155.89925925926 00:30:41.075 } 00:30:41.075 ], 00:30:41.075 "core_count": 1 00:30:41.075 } 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1482338 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1482338 ']' 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1482338 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.075 10:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482338 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482338' 00:30:41.075 killing process with pid 1482338 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1482338 00:30:41.075 Received shutdown signal, test time was about 10.000000 seconds 00:30:41.075 00:30:41.075 Latency(us) 00:30:41.075 [2024-11-19T09:58:28.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.075 [2024-11-19T09:58:28.698Z] =================================================================================================================== 00:30:41.075 [2024-11-19T09:58:28.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:41.075 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1482338 00:30:41.333 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.592 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.160 10:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:42.160 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:42.160 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:42.160 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:42.160 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1479740 00:30:42.160 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1479740 00:30:42.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1479740 Killed "${NVMF_APP[@]}" "$@" 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1483674 00:30:42.419 10:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1483674 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1483674 ']' 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.419 10:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.419 [2024-11-19 10:58:29.853160] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.419 [2024-11-19 10:58:29.854250] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:30:42.420 [2024-11-19 10:58:29.854335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.420 [2024-11-19 10:58:29.929678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.420 [2024-11-19 10:58:29.986090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.420 [2024-11-19 10:58:29.986134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.420 [2024-11-19 10:58:29.986161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.420 [2024-11-19 10:58:29.986171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.420 [2024-11-19 10:58:29.986180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.420 [2024-11-19 10:58:29.986807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.678 [2024-11-19 10:58:30.078035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:42.678 [2024-11-19 10:58:30.078374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.678 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:42.937 [2024-11-19 10:58:30.385617] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:42.937 [2024-11-19 10:58:30.385748] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:42.937 [2024-11-19 10:58:30.385796] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 169d74c8-06bb-4946-9598-e4a5fd366449 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=169d74c8-06bb-4946-9598-e4a5fd366449 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:42.938 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:43.196 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 169d74c8-06bb-4946-9598-e4a5fd366449 -t 2000 00:30:43.454 [ 00:30:43.454 { 00:30:43.454 "name": "169d74c8-06bb-4946-9598-e4a5fd366449", 00:30:43.454 "aliases": [ 00:30:43.454 "lvs/lvol" 00:30:43.454 ], 00:30:43.454 "product_name": "Logical Volume", 00:30:43.454 "block_size": 4096, 00:30:43.454 "num_blocks": 38912, 00:30:43.454 "uuid": "169d74c8-06bb-4946-9598-e4a5fd366449", 00:30:43.454 "assigned_rate_limits": { 00:30:43.454 "rw_ios_per_sec": 0, 00:30:43.454 "rw_mbytes_per_sec": 0, 00:30:43.454 "r_mbytes_per_sec": 0, 00:30:43.454 "w_mbytes_per_sec": 0 00:30:43.454 }, 00:30:43.454 "claimed": false, 00:30:43.454 "zoned": false, 00:30:43.454 "supported_io_types": { 00:30:43.454 "read": true, 00:30:43.454 "write": true, 00:30:43.454 "unmap": true, 00:30:43.454 "flush": false, 00:30:43.454 "reset": true, 00:30:43.454 "nvme_admin": false, 00:30:43.454 "nvme_io": false, 00:30:43.454 "nvme_io_md": false, 00:30:43.454 "write_zeroes": true, 
00:30:43.454 "zcopy": false, 00:30:43.454 "get_zone_info": false, 00:30:43.454 "zone_management": false, 00:30:43.454 "zone_append": false, 00:30:43.454 "compare": false, 00:30:43.454 "compare_and_write": false, 00:30:43.454 "abort": false, 00:30:43.454 "seek_hole": true, 00:30:43.454 "seek_data": true, 00:30:43.454 "copy": false, 00:30:43.454 "nvme_iov_md": false 00:30:43.455 }, 00:30:43.455 "driver_specific": { 00:30:43.455 "lvol": { 00:30:43.455 "lvol_store_uuid": "126dd66c-f9dd-4316-90e8-85c4e3936f0e", 00:30:43.455 "base_bdev": "aio_bdev", 00:30:43.455 "thin_provision": false, 00:30:43.455 "num_allocated_clusters": 38, 00:30:43.455 "snapshot": false, 00:30:43.455 "clone": false, 00:30:43.455 "esnap_clone": false 00:30:43.455 } 00:30:43.455 } 00:30:43.455 } 00:30:43.455 ] 00:30:43.455 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:43.455 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:43.455 10:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:43.713 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:43.713 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:43.713 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:43.971 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:43.971 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:44.229 [2024-11-19 10:58:31.783334] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:44.229 10:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:44.488 request: 00:30:44.488 { 00:30:44.488 "uuid": "126dd66c-f9dd-4316-90e8-85c4e3936f0e", 00:30:44.488 "method": "bdev_lvol_get_lvstores", 00:30:44.488 "req_id": 1 00:30:44.488 } 00:30:44.488 Got JSON-RPC error response 00:30:44.488 response: 00:30:44.488 { 00:30:44.488 "code": -19, 00:30:44.488 "message": "No such device" 00:30:44.488 } 00:30:44.488 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:44.488 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:44.488 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:44.488 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:44.488 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:44.747 aio_bdev 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 169d74c8-06bb-4946-9598-e4a5fd366449 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=169d74c8-06bb-4946-9598-e4a5fd366449 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:44.747 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:45.315 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 169d74c8-06bb-4946-9598-e4a5fd366449 -t 2000 00:30:45.315 [ 00:30:45.315 { 00:30:45.315 "name": "169d74c8-06bb-4946-9598-e4a5fd366449", 00:30:45.315 "aliases": [ 00:30:45.315 "lvs/lvol" 00:30:45.315 ], 00:30:45.315 "product_name": "Logical Volume", 00:30:45.315 "block_size": 4096, 00:30:45.315 "num_blocks": 38912, 00:30:45.315 "uuid": "169d74c8-06bb-4946-9598-e4a5fd366449", 00:30:45.315 "assigned_rate_limits": { 00:30:45.315 "rw_ios_per_sec": 0, 00:30:45.315 "rw_mbytes_per_sec": 0, 00:30:45.315 
"r_mbytes_per_sec": 0, 00:30:45.315 "w_mbytes_per_sec": 0 00:30:45.315 }, 00:30:45.315 "claimed": false, 00:30:45.315 "zoned": false, 00:30:45.315 "supported_io_types": { 00:30:45.315 "read": true, 00:30:45.315 "write": true, 00:30:45.315 "unmap": true, 00:30:45.315 "flush": false, 00:30:45.315 "reset": true, 00:30:45.315 "nvme_admin": false, 00:30:45.315 "nvme_io": false, 00:30:45.315 "nvme_io_md": false, 00:30:45.315 "write_zeroes": true, 00:30:45.315 "zcopy": false, 00:30:45.315 "get_zone_info": false, 00:30:45.315 "zone_management": false, 00:30:45.315 "zone_append": false, 00:30:45.315 "compare": false, 00:30:45.315 "compare_and_write": false, 00:30:45.315 "abort": false, 00:30:45.315 "seek_hole": true, 00:30:45.315 "seek_data": true, 00:30:45.315 "copy": false, 00:30:45.315 "nvme_iov_md": false 00:30:45.315 }, 00:30:45.315 "driver_specific": { 00:30:45.315 "lvol": { 00:30:45.315 "lvol_store_uuid": "126dd66c-f9dd-4316-90e8-85c4e3936f0e", 00:30:45.315 "base_bdev": "aio_bdev", 00:30:45.315 "thin_provision": false, 00:30:45.315 "num_allocated_clusters": 38, 00:30:45.315 "snapshot": false, 00:30:45.315 "clone": false, 00:30:45.315 "esnap_clone": false 00:30:45.315 } 00:30:45.315 } 00:30:45.315 } 00:30:45.315 ] 00:30:45.315 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:45.315 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:45.315 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:45.882 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:45.882 10:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:45.882 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:45.882 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:45.882 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 169d74c8-06bb-4946-9598-e4a5fd366449 00:30:46.168 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 126dd66c-f9dd-4316-90e8-85c4e3936f0e 00:30:46.450 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:46.707 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:46.965 00:30:46.965 real 0m19.761s 00:30:46.965 user 0m36.711s 00:30:46.965 sys 0m4.830s 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:46.965 ************************************ 00:30:46.965 END TEST lvs_grow_dirty 00:30:46.965 ************************************ 
00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:46.965 nvmf_trace.0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.965 10:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.965 rmmod nvme_tcp 00:30:46.965 rmmod nvme_fabrics 00:30:46.965 rmmod nvme_keyring 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1483674 ']' 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1483674 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1483674 ']' 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1483674 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483674 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:46.965 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:46.966 
10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483674' 00:30:46.966 killing process with pid 1483674 00:30:46.966 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1483674 00:30:46.966 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1483674 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.223 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.757 
10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.758 00:30:49.758 real 0m43.074s 00:30:49.758 user 0m55.831s 00:30:49.758 sys 0m8.675s 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:49.758 ************************************ 00:30:49.758 END TEST nvmf_lvs_grow 00:30:49.758 ************************************ 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.758 ************************************ 00:30:49.758 START TEST nvmf_bdev_io_wait 00:30:49.758 ************************************ 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:49.758 * Looking for test storage... 
00:30:49.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.758 --rc genhtml_branch_coverage=1 00:30:49.758 --rc genhtml_function_coverage=1 00:30:49.758 --rc genhtml_legend=1 00:30:49.758 --rc geninfo_all_blocks=1 00:30:49.758 --rc geninfo_unexecuted_blocks=1 00:30:49.758 00:30:49.758 ' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.758 --rc genhtml_branch_coverage=1 00:30:49.758 --rc genhtml_function_coverage=1 00:30:49.758 --rc genhtml_legend=1 00:30:49.758 --rc geninfo_all_blocks=1 00:30:49.758 --rc geninfo_unexecuted_blocks=1 00:30:49.758 00:30:49.758 ' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.758 --rc genhtml_branch_coverage=1 00:30:49.758 --rc genhtml_function_coverage=1 00:30:49.758 --rc genhtml_legend=1 00:30:49.758 --rc geninfo_all_blocks=1 00:30:49.758 --rc geninfo_unexecuted_blocks=1 00:30:49.758 00:30:49.758 ' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.758 --rc genhtml_branch_coverage=1 00:30:49.758 --rc genhtml_function_coverage=1 
00:30:49.758 --rc genhtml_legend=1 00:30:49.758 --rc geninfo_all_blocks=1 00:30:49.758 --rc geninfo_unexecuted_blocks=1 00:30:49.758 00:30:49.758 ' 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.758 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:49.759 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.759 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.759 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.759 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.759 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:51.677 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:51.677 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:51.677 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:51.677 Found net devices under 0000:09:00.0: cvl_0_0 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:51.677 Found net devices under 0000:09:00.1: cvl_0_1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:51.677 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.677 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.678 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.678 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.678 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:51.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:30:51.678 00:30:51.678 --- 10.0.0.2 ping statistics --- 00:30:51.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.678 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:30:51.678 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:30:51.938 00:30:51.938 --- 10.0.0.1 ping statistics --- 00:30:51.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.938 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.938 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1486326 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1486326 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1486326 ']' 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.938 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.938 [2024-11-19 10:58:39.386844] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.938 [2024-11-19 10:58:39.387923] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:51.938 [2024-11-19 10:58:39.388000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.938 [2024-11-19 10:58:39.458267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.938 [2024-11-19 10:58:39.515239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.938 [2024-11-19 10:58:39.515314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.938 [2024-11-19 10:58:39.515330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.938 [2024-11-19 10:58:39.515341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.938 [2024-11-19 10:58:39.515364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:51.938 [2024-11-19 10:58:39.516810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.938 [2024-11-19 10:58:39.516873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.938 [2024-11-19 10:58:39.516944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.938 [2024-11-19 10:58:39.516947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.938 [2024-11-19 10:58:39.517430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:52.196 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.196 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:52.196 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.196 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.196 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 [2024-11-19 10:58:39.698512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:52.197 [2024-11-19 10:58:39.698708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:52.197 [2024-11-19 10:58:39.699682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:52.197 [2024-11-19 10:58:39.700551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 [2024-11-19 10:58:39.705701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 Malloc0 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 [2024-11-19 10:58:39.765812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1486348 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1486350 00:30:52.197 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.197 { 00:30:52.197 "params": { 00:30:52.197 "name": "Nvme$subsystem", 00:30:52.197 "trtype": "$TEST_TRANSPORT", 00:30:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.197 "adrfam": "ipv4", 00:30:52.197 "trsvcid": "$NVMF_PORT", 00:30:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.197 "hdgst": ${hdgst:-false}, 00:30:52.197 "ddgst": ${ddgst:-false} 00:30:52.197 }, 00:30:52.197 "method": "bdev_nvme_attach_controller" 00:30:52.197 } 00:30:52.197 EOF 00:30:52.197 )") 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1486352 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.197 10:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.197 { 00:30:52.197 "params": { 00:30:52.197 "name": "Nvme$subsystem", 00:30:52.197 "trtype": "$TEST_TRANSPORT", 00:30:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.197 "adrfam": "ipv4", 00:30:52.197 "trsvcid": "$NVMF_PORT", 00:30:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.197 "hdgst": ${hdgst:-false}, 00:30:52.197 "ddgst": ${ddgst:-false} 00:30:52.197 }, 00:30:52.197 "method": "bdev_nvme_attach_controller" 00:30:52.197 } 00:30:52.197 EOF 00:30:52.197 )") 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1486355 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.197 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.197 { 00:30:52.197 "params": { 00:30:52.197 "name": 
"Nvme$subsystem", 00:30:52.197 "trtype": "$TEST_TRANSPORT", 00:30:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.197 "adrfam": "ipv4", 00:30:52.197 "trsvcid": "$NVMF_PORT", 00:30:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.197 "hdgst": ${hdgst:-false}, 00:30:52.197 "ddgst": ${ddgst:-false} 00:30:52.197 }, 00:30:52.197 "method": "bdev_nvme_attach_controller" 00:30:52.197 } 00:30:52.197 EOF 00:30:52.198 )") 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.198 { 00:30:52.198 "params": { 00:30:52.198 "name": "Nvme$subsystem", 00:30:52.198 "trtype": "$TEST_TRANSPORT", 00:30:52.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.198 "adrfam": "ipv4", 00:30:52.198 "trsvcid": "$NVMF_PORT", 00:30:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.198 "hdgst": ${hdgst:-false}, 00:30:52.198 "ddgst": ${ddgst:-false} 00:30:52.198 }, 00:30:52.198 "method": 
"bdev_nvme_attach_controller" 00:30:52.198 } 00:30:52.198 EOF 00:30:52.198 )") 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1486348 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.198 "params": { 00:30:52.198 "name": "Nvme1", 00:30:52.198 "trtype": "tcp", 00:30:52.198 "traddr": "10.0.0.2", 00:30:52.198 "adrfam": "ipv4", 00:30:52.198 "trsvcid": "4420", 00:30:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.198 "hdgst": false, 00:30:52.198 "ddgst": false 00:30:52.198 }, 00:30:52.198 "method": "bdev_nvme_attach_controller" 00:30:52.198 }' 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.198 "params": { 00:30:52.198 "name": "Nvme1", 00:30:52.198 "trtype": "tcp", 00:30:52.198 "traddr": "10.0.0.2", 00:30:52.198 "adrfam": "ipv4", 00:30:52.198 "trsvcid": "4420", 00:30:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.198 "hdgst": false, 00:30:52.198 "ddgst": false 00:30:52.198 }, 00:30:52.198 "method": "bdev_nvme_attach_controller" 00:30:52.198 }' 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.198 "params": { 00:30:52.198 "name": "Nvme1", 00:30:52.198 "trtype": "tcp", 00:30:52.198 "traddr": "10.0.0.2", 00:30:52.198 "adrfam": "ipv4", 00:30:52.198 "trsvcid": "4420", 00:30:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.198 "hdgst": false, 00:30:52.198 "ddgst": false 00:30:52.198 }, 00:30:52.198 "method": "bdev_nvme_attach_controller" 00:30:52.198 }' 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:52.198 10:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.198 "params": { 00:30:52.198 "name": "Nvme1", 00:30:52.198 "trtype": "tcp", 00:30:52.198 "traddr": "10.0.0.2", 00:30:52.198 "adrfam": "ipv4", 00:30:52.198 "trsvcid": "4420", 00:30:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.198 "hdgst": false, 00:30:52.198 "ddgst": false 00:30:52.198 }, 00:30:52.198 "method": "bdev_nvme_attach_controller" 
00:30:52.198 }' 00:30:52.457 [2024-11-19 10:58:39.818222] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:52.457 [2024-11-19 10:58:39.818221] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:52.457 [2024-11-19 10:58:39.818221] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:52.457 [2024-11-19 10:58:39.818222] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:52.457 [2024-11-19 10:58:39.818334] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:52.457 [2024-11-19 10:58:39.818334] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:52.457 [2024-11-19 10:58:39.818335] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:52.457 [2024-11-19 10:58:39.818335] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:52.457 [2024-11-19 10:58:39.998317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.457 [2024-11-19 10:58:40.056075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:52.715 [2024-11-19 10:58:40.105394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.715 [2024-11-19 10:58:40.162877] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.715 [2024-11-19 10:58:40.212071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.715 [2024-11-19 10:58:40.271646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:52.715 [2024-11-19 10:58:40.325571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.973 [2024-11-19 10:58:40.380569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:52.973 Running I/O for 1 seconds... 00:30:52.973 Running I/O for 1 seconds... 00:30:52.973 Running I/O for 1 seconds... 00:30:52.973 Running I/O for 1 seconds... 00:30:53.906 6388.00 IOPS, 24.95 MiB/s 00:30:53.906 Latency(us) 00:30:53.906 [2024-11-19T09:58:41.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.906 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:53.906 Nvme1n1 : 1.02 6366.99 24.87 0.00 0.00 19848.64 4830.25 29515.47 00:30:53.906 [2024-11-19T09:58:41.529Z] =================================================================================================================== 00:30:53.906 [2024-11-19T09:58:41.529Z] Total : 6366.99 24.87 0.00 0.00 19848.64 4830.25 29515.47 00:30:53.906 9546.00 IOPS, 37.29 MiB/s 00:30:53.906 Latency(us) 00:30:53.906 [2024-11-19T09:58:41.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.906 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:53.906 Nvme1n1 : 1.01 9589.59 37.46 0.00 0.00 13285.51 4636.07 18252.99 00:30:53.906 [2024-11-19T09:58:41.529Z] =================================================================================================================== 00:30:53.906 [2024-11-19T09:58:41.529Z] Total : 9589.59 37.46 0.00 0.00 13285.51 4636.07 18252.99 00:30:54.164 6580.00 IOPS, 25.70 MiB/s 00:30:54.164 Latency(us) 00:30:54.164 [2024-11-19T09:58:41.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:30:54.164 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:54.164 Nvme1n1 : 1.01 6706.94 26.20 0.00 0.00 19035.17 3762.25 38836.15 00:30:54.164 [2024-11-19T09:58:41.787Z] =================================================================================================================== 00:30:54.164 [2024-11-19T09:58:41.787Z] Total : 6706.94 26.20 0.00 0.00 19035.17 3762.25 38836.15 00:30:54.164 179392.00 IOPS, 700.75 MiB/s 00:30:54.164 Latency(us) 00:30:54.164 [2024-11-19T09:58:41.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.164 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:54.164 Nvme1n1 : 1.00 179050.14 699.41 0.00 0.00 711.06 297.34 1881.13 00:30:54.164 [2024-11-19T09:58:41.787Z] =================================================================================================================== 00:30:54.164 [2024-11-19T09:58:41.787Z] Total : 179050.14 699.41 0.00 0.00 711.06 297.34 1881.13 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1486350 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1486352 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1486355 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.164 10:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.164 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.164 rmmod nvme_tcp 00:30:54.422 rmmod nvme_fabrics 00:30:54.422 rmmod nvme_keyring 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1486326 ']' 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1486326 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1486326 ']' 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1486326 00:30:54.422 10:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486326 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486326' 00:30:54.422 killing process with pid 1486326 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1486326 00:30:54.422 10:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1486326 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # 
iptables-restore 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.682 10:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.588 00:30:56.588 real 0m7.321s 00:30:56.588 user 0m14.663s 00:30:56.588 sys 0m3.883s 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:56.588 ************************************ 00:30:56.588 END TEST nvmf_bdev_io_wait 00:30:56.588 ************************************ 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:56.588 
************************************ 00:30:56.588 START TEST nvmf_queue_depth 00:30:56.588 ************************************ 00:30:56.588 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:56.846 * Looking for test storage... 00:30:56.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@338 -- # local 'op=<' 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.846 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@355 -- # echo 2 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.847 --rc genhtml_branch_coverage=1 00:30:56.847 --rc genhtml_function_coverage=1 00:30:56.847 --rc genhtml_legend=1 00:30:56.847 --rc geninfo_all_blocks=1 00:30:56.847 --rc geninfo_unexecuted_blocks=1 00:30:56.847 00:30:56.847 ' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.847 --rc genhtml_branch_coverage=1 00:30:56.847 --rc genhtml_function_coverage=1 00:30:56.847 --rc genhtml_legend=1 00:30:56.847 --rc geninfo_all_blocks=1 00:30:56.847 --rc geninfo_unexecuted_blocks=1 00:30:56.847 00:30:56.847 ' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.847 --rc genhtml_branch_coverage=1 00:30:56.847 --rc genhtml_function_coverage=1 00:30:56.847 --rc genhtml_legend=1 00:30:56.847 --rc geninfo_all_blocks=1 
00:30:56.847 --rc geninfo_unexecuted_blocks=1 00:30:56.847 00:30:56.847 ' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.847 --rc genhtml_branch_coverage=1 00:30:56.847 --rc genhtml_function_coverage=1 00:30:56.847 --rc genhtml_legend=1 00:30:56.847 --rc geninfo_all_blocks=1 00:30:56.847 --rc geninfo_unexecuted_blocks=1 00:30:56.847 00:30:56.847 ' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.847 
10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.847 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:56.847 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.848 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.848 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.848 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.381 
10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:59.381 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.381 10:58:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:59.381 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:59.381 Found net devices under 0000:09:00.0: cvl_0_0 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:59.381 Found net devices under 0000:09:00.1: cvl_0_1 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.381 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.382 10:58:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:59.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:30:59.382 00:30:59.382 --- 10.0.0.2 ping statistics --- 00:30:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.382 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:30:59.382 00:30:59.382 --- 10.0.0.1 ping statistics --- 00:30:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.382 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.382 10:58:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1488575 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1488575 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1488575 ']' 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.382 [2024-11-19 10:58:46.694651] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.382 [2024-11-19 10:58:46.695717] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:30:59.382 [2024-11-19 10:58:46.695782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.382 [2024-11-19 10:58:46.770744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.382 [2024-11-19 10:58:46.826969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.382 [2024-11-19 10:58:46.827018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.382 [2024-11-19 10:58:46.827047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.382 [2024-11-19 10:58:46.827058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.382 [2024-11-19 10:58:46.827068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.382 [2024-11-19 10:58:46.827670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.382 [2024-11-19 10:58:46.911814] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.382 [2024-11-19 10:58:46.912128] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
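For readers skimming the xtrace above: the network plumbing performed by `nvmf_tcp_init` (the `nvmf/common.sh@250`–`@291` records in the trace) condenses to the sketch below. This is a paraphrase of the commands visible in the log, not an excerpt from SPDK's `test/nvmf/common.sh`; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.0/24` addresses are taken from the trace, and the commands require root on a host where the two NIC ports already carry those names.

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP test-network setup seen in the trace above.
# Assumption: two physical ports already named cvl_0_0 / cvl_0_1, run as root.
set -euo pipefail

TARGET_IF=cvl_0_0          # NVMF_TARGET_INTERFACE in the log
INITIATOR_IF=cvl_0_1       # NVMF_INITIATOR_INTERFACE in the log
NS=cvl_0_0_ns_spdk         # NVMF_TARGET_NAMESPACE in the log

# Start from a clean slate on both ports.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target port in its own namespace so target and initiator
# traffic traverses the real link between the two ports.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address the pair: initiator 10.0.0.1, target 10.0.0.2 (as in the trace).
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side, tagged the way the
# log's "ipts" helper tags it so the rule can be cleaned up later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity pings in both directions, as the trace does before proceeding.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this point the trace launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`, which is why the subsystem later listens on 10.0.0.2 while bdevperf connects from the root namespace via 10.0.0.1.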
00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.382 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.383 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.383 [2024-11-19 10:58:46.968266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.383 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.383 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:59.383 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.383 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.641 Malloc0 00:30:59.641 10:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.641 [2024-11-19 10:58:47.028386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.641 
10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1488600 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1488600 /var/tmp/bdevperf.sock 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1488600 ']' 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:59.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.641 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.641 [2024-11-19 10:58:47.075080] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:30:59.641 [2024-11-19 10:58:47.075163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488600 ] 00:30:59.641 [2024-11-19 10:58:47.142731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.641 [2024-11-19 10:58:47.201609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.899 NVMe0n1 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.899 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:00.156 Running I/O for 10 seconds... 
00:31:02.023 8192.00 IOPS, 32.00 MiB/s [2024-11-19T09:58:51.019Z] 8225.50 IOPS, 32.13 MiB/s [2024-11-19T09:58:51.954Z] 8517.67 IOPS, 33.27 MiB/s [2024-11-19T09:58:52.887Z] 8452.75 IOPS, 33.02 MiB/s [2024-11-19T09:58:53.820Z] 8576.40 IOPS, 33.50 MiB/s [2024-11-19T09:58:54.753Z] 8543.33 IOPS, 33.37 MiB/s [2024-11-19T09:58:55.687Z] 8630.14 IOPS, 33.71 MiB/s [2024-11-19T09:58:56.622Z] 8601.75 IOPS, 33.60 MiB/s [2024-11-19T09:58:57.995Z] 8646.11 IOPS, 33.77 MiB/s [2024-11-19T09:58:57.995Z] 8645.20 IOPS, 33.77 MiB/s 00:31:10.372 Latency(us) 00:31:10.372 [2024-11-19T09:58:57.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.372 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:10.372 Verification LBA range: start 0x0 length 0x4000 00:31:10.372 NVMe0n1 : 10.08 8679.12 33.90 0.00 0.00 117411.47 20680.25 71070.15 00:31:10.372 [2024-11-19T09:58:57.995Z] =================================================================================================================== 00:31:10.372 [2024-11-19T09:58:57.995Z] Total : 8679.12 33.90 0.00 0.00 117411.47 20680.25 71070.15 00:31:10.372 { 00:31:10.372 "results": [ 00:31:10.372 { 00:31:10.372 "job": "NVMe0n1", 00:31:10.372 "core_mask": "0x1", 00:31:10.372 "workload": "verify", 00:31:10.372 "status": "finished", 00:31:10.372 "verify_range": { 00:31:10.372 "start": 0, 00:31:10.372 "length": 16384 00:31:10.372 }, 00:31:10.372 "queue_depth": 1024, 00:31:10.372 "io_size": 4096, 00:31:10.372 "runtime": 10.083512, 00:31:10.372 "iops": 8679.118941892468, 00:31:10.372 "mibps": 33.90280836676745, 00:31:10.372 "io_failed": 0, 00:31:10.372 "io_timeout": 0, 00:31:10.372 "avg_latency_us": 117411.47164407608, 00:31:10.372 "min_latency_us": 20680.248888888887, 00:31:10.372 "max_latency_us": 71070.15111111112 00:31:10.372 } 00:31:10.372 ], 00:31:10.372 "core_count": 1 00:31:10.372 } 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1488600 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1488600 ']' 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1488600 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488600 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488600' 00:31:10.372 killing process with pid 1488600 00:31:10.372 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1488600 00:31:10.373 Received shutdown signal, test time was about 10.000000 seconds 00:31:10.373 00:31:10.373 Latency(us) 00:31:10.373 [2024-11-19T09:58:57.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.373 [2024-11-19T09:58:57.996Z] =================================================================================================================== 00:31:10.373 [2024-11-19T09:58:57.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1488600 00:31:10.373 10:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.373 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.373 rmmod nvme_tcp 00:31:10.632 rmmod nvme_fabrics 00:31:10.632 rmmod nvme_keyring 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1488575 ']' 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1488575 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1488575 ']' 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1488575 00:31:10.632 10:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488575 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488575' 00:31:10.632 killing process with pid 1488575 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1488575 00:31:10.632 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1488575 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.891 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.796 00:31:12.796 real 0m16.193s 00:31:12.796 user 0m22.283s 00:31:12.796 sys 0m3.455s 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:12.796 ************************************ 00:31:12.796 END TEST nvmf_queue_depth 00:31:12.796 ************************************ 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.796 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:12.796 ************************************ 00:31:12.796 START 
TEST nvmf_target_multipath 00:31:12.797 ************************************ 00:31:13.055 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:13.055 * Looking for test storage... 00:31:13.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.055 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:13.055 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:13.055 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.056 10:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:13.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.056 --rc genhtml_branch_coverage=1 00:31:13.056 --rc genhtml_function_coverage=1 00:31:13.056 --rc genhtml_legend=1 00:31:13.056 --rc geninfo_all_blocks=1 00:31:13.056 --rc geninfo_unexecuted_blocks=1 00:31:13.056 00:31:13.056 ' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:13.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.056 --rc genhtml_branch_coverage=1 00:31:13.056 --rc genhtml_function_coverage=1 00:31:13.056 --rc genhtml_legend=1 00:31:13.056 --rc geninfo_all_blocks=1 00:31:13.056 --rc geninfo_unexecuted_blocks=1 00:31:13.056 00:31:13.056 ' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:13.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.056 --rc genhtml_branch_coverage=1 00:31:13.056 --rc genhtml_function_coverage=1 00:31:13.056 --rc genhtml_legend=1 00:31:13.056 --rc geninfo_all_blocks=1 00:31:13.056 --rc geninfo_unexecuted_blocks=1 00:31:13.056 00:31:13.056 ' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:13.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.056 --rc genhtml_branch_coverage=1 00:31:13.056 --rc genhtml_function_coverage=1 00:31:13.056 --rc genhtml_legend=1 00:31:13.056 --rc geninfo_all_blocks=1 00:31:13.056 --rc geninfo_unexecuted_blocks=1 00:31:13.056 00:31:13.056 ' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.056 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.057 10:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.057 10:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.057 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.590 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.590 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:15.591 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:15.591 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:15.591 Found net devices under 0000:09:00.0: cvl_0_0 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.591 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:15.591 Found net devices under 0000:09:00.1: cvl_0_1 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.591 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.591 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.592 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:31:15.592 00:31:15.592 --- 10.0.0.2 ping statistics --- 00:31:15.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.592 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:31:15.592 00:31:15.592 --- 10.0.0.1 ping statistics --- 00:31:15.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.592 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:15.592 only one NIC for nvmf test 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:15.592 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.592 rmmod nvme_tcp 00:31:15.592 rmmod nvme_fabrics 00:31:15.592 rmmod nvme_keyring 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:15.592 10:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.592 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:17.498 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.499 
10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.499 00:31:17.499 real 0m4.501s 00:31:17.499 user 0m0.884s 00:31:17.499 sys 0m1.606s 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:17.499 ************************************ 00:31:17.499 END TEST nvmf_target_multipath 00:31:17.499 ************************************ 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.499 ************************************ 00:31:17.499 START TEST nvmf_zcopy 00:31:17.499 ************************************ 00:31:17.499 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:17.499 * Looking for test storage... 
00:31:17.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:17.499 10:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:17.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.499 --rc genhtml_branch_coverage=1 00:31:17.499 --rc genhtml_function_coverage=1 00:31:17.499 --rc genhtml_legend=1 00:31:17.499 --rc geninfo_all_blocks=1 00:31:17.499 --rc geninfo_unexecuted_blocks=1 00:31:17.499 00:31:17.499 ' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:17.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.499 --rc genhtml_branch_coverage=1 00:31:17.499 --rc genhtml_function_coverage=1 00:31:17.499 --rc genhtml_legend=1 00:31:17.499 --rc geninfo_all_blocks=1 00:31:17.499 --rc geninfo_unexecuted_blocks=1 00:31:17.499 00:31:17.499 ' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:17.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.499 --rc genhtml_branch_coverage=1 00:31:17.499 --rc genhtml_function_coverage=1 00:31:17.499 --rc genhtml_legend=1 00:31:17.499 --rc geninfo_all_blocks=1 00:31:17.499 --rc geninfo_unexecuted_blocks=1 00:31:17.499 00:31:17.499 ' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:17.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.499 --rc genhtml_branch_coverage=1 00:31:17.499 --rc genhtml_function_coverage=1 00:31:17.499 --rc genhtml_legend=1 00:31:17.499 --rc geninfo_all_blocks=1 00:31:17.499 --rc geninfo_unexecuted_blocks=1 00:31:17.499 00:31:17.499 ' 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:17.499 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.500 10:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.500 10:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.500 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.758 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.758 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.758 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.758 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.659 
10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.659 10:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:19.659 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:19.659 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:19.659 Found net devices under 0000:09:00.0: cvl_0_0 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:19.659 Found net devices under 0000:09:00.1: cvl_0_1 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.659 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.660 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.918 10:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:31:19.918 00:31:19.918 --- 10.0.0.2 ping statistics --- 00:31:19.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.918 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:19.918 00:31:19.918 --- 10.0.0.1 ping statistics --- 00:31:19.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.918 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1493772 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1493772 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1493772 ']' 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.918 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:19.918 [2024-11-19 10:59:07.412955] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:19.918 [2024-11-19 10:59:07.414021] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:31:19.918 [2024-11-19 10:59:07.414072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.918 [2024-11-19 10:59:07.483626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.918 [2024-11-19 10:59:07.536453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.918 [2024-11-19 10:59:07.536509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.918 [2024-11-19 10:59:07.536535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.918 [2024-11-19 10:59:07.536549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.918 [2024-11-19 10:59:07.536560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.918 [2024-11-19 10:59:07.537193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.177 [2024-11-19 10:59:07.621900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.177 [2024-11-19 10:59:07.622188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.177 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 [2024-11-19 10:59:07.673818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 
10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 [2024-11-19 10:59:07.689967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 malloc0 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.178 { 00:31:20.178 "params": { 00:31:20.178 "name": "Nvme$subsystem", 00:31:20.178 "trtype": "$TEST_TRANSPORT", 00:31:20.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.178 "adrfam": "ipv4", 00:31:20.178 "trsvcid": "$NVMF_PORT", 00:31:20.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.178 "hdgst": ${hdgst:-false}, 00:31:20.178 "ddgst": ${ddgst:-false} 00:31:20.178 }, 00:31:20.178 "method": "bdev_nvme_attach_controller" 00:31:20.178 } 00:31:20.178 EOF 00:31:20.178 )") 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:20.178 10:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:20.178 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.178 "params": { 00:31:20.178 "name": "Nvme1", 00:31:20.178 "trtype": "tcp", 00:31:20.178 "traddr": "10.0.0.2", 00:31:20.178 "adrfam": "ipv4", 00:31:20.178 "trsvcid": "4420", 00:31:20.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.178 "hdgst": false, 00:31:20.178 "ddgst": false 00:31:20.178 }, 00:31:20.178 "method": "bdev_nvme_attach_controller" 00:31:20.178 }' 00:31:20.178 [2024-11-19 10:59:07.785764] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:31:20.178 [2024-11-19 10:59:07.785842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493805 ] 00:31:20.436 [2024-11-19 10:59:07.854768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.436 [2024-11-19 10:59:07.914620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.694 Running I/O for 10 seconds... 
00:31:22.617 5612.00 IOPS, 43.84 MiB/s [2024-11-19T09:59:11.613Z] 5674.50 IOPS, 44.33 MiB/s [2024-11-19T09:59:12.547Z] 5685.33 IOPS, 44.42 MiB/s [2024-11-19T09:59:13.481Z] 5694.25 IOPS, 44.49 MiB/s [2024-11-19T09:59:14.415Z] 5702.40 IOPS, 44.55 MiB/s [2024-11-19T09:59:15.349Z] 5707.67 IOPS, 44.59 MiB/s [2024-11-19T09:59:16.283Z] 5705.86 IOPS, 44.58 MiB/s [2024-11-19T09:59:17.657Z] 5708.50 IOPS, 44.60 MiB/s [2024-11-19T09:59:18.592Z] 5704.78 IOPS, 44.57 MiB/s [2024-11-19T09:59:18.592Z] 5714.50 IOPS, 44.64 MiB/s 00:31:30.969 Latency(us) 00:31:30.969 [2024-11-19T09:59:18.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.969 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:30.969 Verification LBA range: start 0x0 length 0x1000 00:31:30.969 Nvme1n1 : 10.02 5717.32 44.67 0.00 0.00 22328.44 3907.89 29127.11 00:31:30.969 [2024-11-19T09:59:18.592Z] =================================================================================================================== 00:31:30.969 [2024-11-19T09:59:18.592Z] Total : 5717.32 44.67 0.00 0.00 22328.44 3907.89 29127.11 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1495104 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:30.969 10:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:30.969 { 00:31:30.969 "params": { 00:31:30.969 "name": "Nvme$subsystem", 00:31:30.969 "trtype": "$TEST_TRANSPORT", 00:31:30.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.969 "adrfam": "ipv4", 00:31:30.969 "trsvcid": "$NVMF_PORT", 00:31:30.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.969 "hdgst": ${hdgst:-false}, 00:31:30.969 "ddgst": ${ddgst:-false} 00:31:30.969 }, 00:31:30.969 "method": "bdev_nvme_attach_controller" 00:31:30.969 } 00:31:30.969 EOF 00:31:30.969 )") 00:31:30.969 [2024-11-19 10:59:18.493816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.493854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:30.969 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:30.969 "params": { 00:31:30.969 "name": "Nvme1", 00:31:30.969 "trtype": "tcp", 00:31:30.969 "traddr": "10.0.0.2", 00:31:30.969 "adrfam": "ipv4", 00:31:30.969 "trsvcid": "4420", 00:31:30.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.969 "hdgst": false, 00:31:30.969 "ddgst": false 00:31:30.969 }, 00:31:30.969 "method": "bdev_nvme_attach_controller" 00:31:30.969 }' 00:31:30.969 [2024-11-19 10:59:18.501725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.501755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.509718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.509738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.517702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.517721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.525718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.525738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.533704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.533723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.534922] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:31:30.969 [2024-11-19 10:59:18.534977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495104 ] 00:31:30.969 [2024-11-19 10:59:18.541717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.541737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.549720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.549740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.557707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.557727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.565721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.565741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.573719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.573738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.581716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.581735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.969 [2024-11-19 10:59:18.589719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.969 [2024-11-19 10:59:18.589738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:31:31.226 [2024-11-19 10:59:18.597716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.597735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.604441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.226 [2024-11-19 10:59:18.605717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.605738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.613729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.613755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.621733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.621764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.629719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.629738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.637719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.637739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.645717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.645736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.653702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.653721] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.661718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.661736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.666393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.226 [2024-11-19 10:59:18.669717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.669736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.677717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.677736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.685747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.685775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.693748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.693778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.701736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.701766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.709743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.709772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.717745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:31:31.226 [2024-11-19 10:59:18.717774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.725748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.725777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.733733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.226 [2024-11-19 10:59:18.733757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.226 [2024-11-19 10:59:18.741706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.741726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.749727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.749757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.757735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.757765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.765708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.765728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.773719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.773738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.781725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 
10:59:18.781756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.789736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.789760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.797737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.797761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.805708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.805731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.813722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.813744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.821718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.821739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.829719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.829739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.837703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.837722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.227 [2024-11-19 10:59:18.845707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.227 [2024-11-19 10:59:18.845727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.853723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.853745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.861708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.861730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.869718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.869738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.877720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.877739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.885728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.885747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.893703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.893723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.901737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.901761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.909719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.909740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 
[2024-11-19 10:59:18.917703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.917723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.925703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.925723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.933703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.933723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.941718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.941738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.949723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.949745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.957703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.957724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.965703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.965722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.973702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.973722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.981706] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.981725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.989721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.989741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:18.997723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:18.997746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:19.005749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.005774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 Running I/O for 5 seconds... 00:31:31.485 [2024-11-19 10:59:19.013721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.013748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:19.027319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.027363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:19.043789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.043816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:19.053473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.053501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.485 [2024-11-19 10:59:19.065821] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.485 [2024-11-19 10:59:19.065847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.486 [2024-11-19 10:59:19.076156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.486 [2024-11-19 10:59:19.076184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.486 [2024-11-19 10:59:19.089463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.486 [2024-11-19 10:59:19.089506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.486 [2024-11-19 10:59:19.099045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.486 [2024-11-19 10:59:19.099070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.110908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.110934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.121459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.121487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.132382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.132408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.146450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.146484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.155893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.155919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.167806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.167847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.184017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.184059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.193396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.193424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.205272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.205300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.216404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.216431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.230710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.230737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.240475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.240504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.252669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 
[2024-11-19 10:59:19.252694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.266929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.266957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.276528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.276556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.288385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.288412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.302980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.303007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.312693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.312721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.326458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.326487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.744 [2024-11-19 10:59:19.335922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.744 [2024-11-19 10:59:19.335948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.745 [2024-11-19 10:59:19.347865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.745 [2024-11-19 10:59:19.347892] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.745 [2024-11-19 10:59:19.362362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.745 [2024-11-19 10:59:19.362391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.371884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.371910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.383936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.383963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.399993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.400020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.417643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.417669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.427389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.427415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.442014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.442040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.451406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.451433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:32.003 [2024-11-19 10:59:19.467470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.467496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.477289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.477324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.489243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.489269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.501708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.501737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.003 [2024-11-19 10:59:19.510916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.003 [2024-11-19 10:59:19.510942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.004 [2024-11-19 10:59:19.526692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.004 [2024-11-19 10:59:19.526718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.004 [2024-11-19 10:59:19.536046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.004 [2024-11-19 10:59:19.536073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.004 [2024-11-19 10:59:19.547881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.004 [2024-11-19 10:59:19.547908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.004 [2024-11-19 10:59:19.563730] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:32.004 [2024-11-19 10:59:19.563771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeated at ~10-15 ms intervals from 10:59:19.573212 through 10:59:20.013845 ...]
00:31:32.521 11693.00 IOPS, 91.35 MiB/s [2024-11-19T09:59:20.144Z]
[... the same error pair repeated from 10:59:20.028089 through 10:59:21.008973 ...]
00:31:33.556 11665.50 IOPS, 91.14 MiB/s [2024-11-19T09:59:21.179Z]
[... the same error pair repeated from 10:59:21.023776 through 10:59:21.610249 ...]
00:31:34.073 [2024-11-19 10:59:21.621031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:34.073 [2024-11-19 10:59:21.621061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:31:34.073 [2024-11-19 10:59:21.634982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.635008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.073 [2024-11-19 10:59:21.644021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.644045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.073 [2024-11-19 10:59:21.655805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.655829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.073 [2024-11-19 10:59:21.669639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.669664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.073 [2024-11-19 10:59:21.679662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.679686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.073 [2024-11-19 10:59:21.691544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.073 [2024-11-19 10:59:21.691571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.330 [2024-11-19 10:59:21.707431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.707458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.717144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.717170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.728724] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.728761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.742540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.742567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.751546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.751586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.763535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.763561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.779934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.779958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.789717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.789741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.801528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.801553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.812198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.812222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.827428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.827469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.837171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.837196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.848819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.848843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.859345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.859371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.871129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.871153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.886194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.886220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.896576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.896617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.911070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.911094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.920499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 
[2024-11-19 10:59:21.920524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.934629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.934653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.331 [2024-11-19 10:59:21.943765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.331 [2024-11-19 10:59:21.943805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:21.955092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:21.955119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:21.970156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:21.970180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:21.979679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:21.979703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:21.991474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:21.991499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.006584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.006610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.016028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.016052] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 11667.67 IOPS, 91.15 MiB/s [2024-11-19T09:59:22.212Z] [2024-11-19 10:59:22.031942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.031965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.041367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.041393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.053455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.053480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.064426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.064452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.079344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.079371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.088869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.088892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.103712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.103740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.113909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.113933] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.125605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.125629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.136159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.136182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.151055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.151080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.160319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.160346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.172094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.172119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.188537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.188562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.589 [2024-11-19 10:59:22.204269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.589 [2024-11-19 10:59:22.204293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.847 [2024-11-19 10:59:22.219897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.847 [2024-11-19 10:59:22.219922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:34.847 [2024-11-19 10:59:22.229538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.847 [2024-11-19 10:59:22.229563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.847 [2024-11-19 10:59:22.241612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.847 [2024-11-19 10:59:22.241638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.252615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.252653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.267426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.267453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.276832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.276856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.289011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.289037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.302114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.302145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.311353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.311380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.323002] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.323026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.338829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.338854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.348404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.348441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.362724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.362763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.372532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.372559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.386627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.386675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.395900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.395924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.407506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.407532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.423276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.423320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.433389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.433415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.444939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.444963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.457640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.457665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.848 [2024-11-19 10:59:22.467375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.848 [2024-11-19 10:59:22.467402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.479470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.479511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.492827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.492854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.506383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.506410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.515980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 
[2024-11-19 10:59:22.516004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.527884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.527916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.544601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.544626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.560134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.560175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.576263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.576287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.591599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.591626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.601232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.601257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.613037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.613063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.623569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.623597] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.639584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.639624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.648808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.648833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.662193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.662218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.671508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.671535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.683388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.683414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.699835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.699860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.709985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.710010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.106 [2024-11-19 10:59:22.722232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.106 [2024-11-19 10:59:22.722257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:35.365 [2024-11-19 10:59:22.733254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.733279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.744257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.744282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.760635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.760660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.775190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.775223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.784402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.784427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.799002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.799026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.808117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.808142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.821968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.821992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.831617] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.831642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.843531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.843557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.858890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.858915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.868549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.868599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.884760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.884784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.898142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.898168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.908269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.908318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.922282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.922329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.932091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.932116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.946834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.946873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.956181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.956205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.970651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.970677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.365 [2024-11-19 10:59:22.981092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.365 [2024-11-19 10:59:22.981117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:22.992327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:22.992351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.007331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.007370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.017227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.017251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 11661.00 IOPS, 91.10 MiB/s [2024-11-19T09:59:23.247Z] [2024-11-19 10:59:23.029001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.029026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.042037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.042077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.051404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.051429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.063375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.063401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.078431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.078458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.087620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.087661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.099422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.099447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.109895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.109919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.120600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 
[2024-11-19 10:59:23.120625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.133566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.133592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.143141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.143165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.155073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.155097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.170828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.170853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.180148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.180186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.194639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.194664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.204329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.204355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.218944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.218968] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.228268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.228316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.624 [2024-11-19 10:59:23.243561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.624 [2024-11-19 10:59:23.243602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.253613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.882 [2024-11-19 10:59:23.253654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.265269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.882 [2024-11-19 10:59:23.265293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.276172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.882 [2024-11-19 10:59:23.276196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.291767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.882 [2024-11-19 10:59:23.291806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.300877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.882 [2024-11-19 10:59:23.300902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.882 [2024-11-19 10:59:23.315157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.315181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:35.883 [2024-11-19 10:59:23.324499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.324526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.336065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.336107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.348780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.348820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.363167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.363208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.372618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.372643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.387032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.387056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.396357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.396383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.410494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.410519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.420342] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.420368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.436736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.436761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.446992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.447017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.458525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.458551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.468975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.468999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.482357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.482384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.491767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.491794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.883 [2024-11-19 10:59:23.503800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.883 [2024-11-19 10:59:23.503825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.519628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.519668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.529407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.529435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.541380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.541407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.551888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.551913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.565653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.565677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.575329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.575355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.587054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.587077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.597623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.597664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.608413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 
[2024-11-19 10:59:23.608440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.622062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.622086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.631387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.631415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.643049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.643073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.654197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.654222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.664880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.664905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.676346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.676372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.691833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.691859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.701464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.701490] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.713200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.713224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.725562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.725588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.735077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.735101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.747234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.747259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.142 [2024-11-19 10:59:23.763264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.142 [2024-11-19 10:59:23.763314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.772861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.772902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.786712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.786738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.797350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.797377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:36.401 [2024-11-19 10:59:23.810855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.810880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.819970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.819995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.834659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.834686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.844381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.844408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.858350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.858377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.867926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.867951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.879994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.880020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.895672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.895730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.905077] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.905101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.916874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.916898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.929363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.929390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.938829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.938853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.950630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.950673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.961150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.961175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.976001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.976027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:23.993491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:23.993519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:24.003425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:24.003452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.401 [2024-11-19 10:59:24.015076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.401 [2024-11-19 10:59:24.015100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 11664.20 IOPS, 91.13 MiB/s [2024-11-19T09:59:24.283Z] [2024-11-19 10:59:24.030359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.030386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.037729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.037753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 00:31:36.660 Latency(us) 00:31:36.660 [2024-11-19T09:59:24.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.660 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:36.660 Nvme1n1 : 5.01 11665.50 91.14 0.00 0.00 10958.06 2888.44 21554.06 00:31:36.660 [2024-11-19T09:59:24.283Z] =================================================================================================================== 00:31:36.660 [2024-11-19T09:59:24.283Z] Total : 11665.50 91.14 0.00 0.00 10958.06 2888.44 21554.06 00:31:36.660 [2024-11-19 10:59:24.045726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.045749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.053728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.053751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:36.660 [2024-11-19 10:59:24.061736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.061766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.069765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.069824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.077768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.077809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.085771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.085809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.093767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.093807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.101758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.660 [2024-11-19 10:59:24.101797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.660 [2024-11-19 10:59:24.113796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.113848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.121766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.121806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.129768] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.129810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.137773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.137810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.145772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.145812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.153770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.153809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.161768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.161802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.169764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.169803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.177766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.177805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.185747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.185783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.193739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.193758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.201720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.201738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.209723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.209744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.217721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.217741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.225772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.225823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.233775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.233816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.241761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.241793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.249720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.249738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.257720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 
[2024-11-19 10:59:24.257739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 [2024-11-19 10:59:24.265720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.661 [2024-11-19 10:59:24.265739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1495104) - No such process 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1495104 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.661 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:36.966 delay0 00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.966 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:36.966 [2024-11-19 10:59:24.434412] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:45.064 Initializing NVMe Controllers 00:31:45.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.064 Initialization complete. Launching workers. 00:31:45.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 20804 00:31:45.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20915, failed to submit 125 00:31:45.064 success 20839, unsuccessful 76, failed 0 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:45.064 10:59:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.064 rmmod nvme_tcp 00:31:45.064 rmmod nvme_fabrics 00:31:45.064 rmmod nvme_keyring 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1493772 ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1493772 ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493772' 00:31:45.064 
killing process with pid 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1493772 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.064 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.441 00:31:46.441 real 0m29.060s 00:31:46.441 user 0m41.509s 00:31:46.441 sys 
0m10.079s 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 ************************************ 00:31:46.441 END TEST nvmf_zcopy 00:31:46.441 ************************************ 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.441 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:46.700 ************************************ 00:31:46.700 START TEST nvmf_nmic 00:31:46.700 ************************************ 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:46.700 * Looking for test storage... 
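The zcopy teardown traced above (`common/autotest_common.sh@954`–`@978`) follows a compact pattern: guard against an empty pid, probe the process with `kill -0`, check its name so a `sudo` wrapper is never signalled directly, then kill and reap. A minimal stand-alone condensation of that pattern (an illustrative sketch, not SPDK's actual `killprocess` helper, which carries additional checks):

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the traced teardown: guard, name-check, kill, wait.
killprocess() {
  local pid=$1 name
  [ -n "$pid" ] || return 1                 # mirrors the '[' -z "$pid" ']' guard
  kill -0 "$pid" 2>/dev/null || return 0    # process already gone, nothing to do
  name=$(ps --no-headers -o comm= "$pid")   # same probe as autotest_common.sh@960
  [ "$name" = sudo ] && return 1            # never signal a sudo wrapper directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap it if it was our child
}
```

For example, `sleep 60 & killprocess $!` terminates and reaps the background sleeper; the trailing `wait` is what lets the caller observe the process's actual exit rather than racing its shutdown.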
00:31:46.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.700 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.701 --rc genhtml_branch_coverage=1 00:31:46.701 --rc genhtml_function_coverage=1 00:31:46.701 --rc genhtml_legend=1 00:31:46.701 --rc geninfo_all_blocks=1 00:31:46.701 --rc geninfo_unexecuted_blocks=1 00:31:46.701 00:31:46.701 ' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.701 --rc genhtml_branch_coverage=1 00:31:46.701 --rc genhtml_function_coverage=1 00:31:46.701 --rc genhtml_legend=1 00:31:46.701 --rc geninfo_all_blocks=1 00:31:46.701 --rc geninfo_unexecuted_blocks=1 00:31:46.701 00:31:46.701 ' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.701 --rc genhtml_branch_coverage=1 00:31:46.701 --rc genhtml_function_coverage=1 00:31:46.701 --rc genhtml_legend=1 00:31:46.701 --rc geninfo_all_blocks=1 00:31:46.701 --rc geninfo_unexecuted_blocks=1 00:31:46.701 00:31:46.701 ' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:46.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.701 --rc genhtml_branch_coverage=1 00:31:46.701 --rc genhtml_function_coverage=1 00:31:46.701 --rc genhtml_legend=1 00:31:46.701 --rc geninfo_all_blocks=1 00:31:46.701 --rc geninfo_unexecuted_blocks=1 00:31:46.701 00:31:46.701 ' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.701 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.702 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.233 10:59:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:49.233 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:49.233 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
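The discovery loop above ("Found 0000:09:00.0 (0x8086 - 0x159b)", then "Found net devices under 0000:09:00.0: cvl_0_0") first matches supported vendor/device IDs and then asks sysfs which net interfaces hang off each PCI function, exactly as `nvmf/common.sh@411` does with `"/sys/bus/pci/devices/$pci/net/"*`. A hedged sketch of just the sysfs step (`pci_to_netdevs` is an illustrative name, not an SPDK helper):

```shell
#!/usr/bin/env bash
# List the kernel net devices registered under one PCI function, the way the
# trace derives cvl_0_0 / cvl_0_1 from 0000:09:00.0 and 0000:09:00.1.
pci_to_netdevs() {
  local pci=$1 path
  local -a devs=()
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$path" ] && devs+=("${path##*/}")   # keep only the interface name
  done
  (( ${#devs[@]} > 0 )) || return 1           # no net devices, or no such PCI addr
  printf '%s\n' "${devs[@]}"
}
```

On the CI node in this log, `pci_to_netdevs 0000:09:00.0` would print `cvl_0_0`; on a machine without that device the function simply returns nonzero, which is why the trace guards the result with `(( 1 == 0 ))`-style counts before appending to `net_devs`.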
00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:49.233 Found net devices under 0000:09:00.0: cvl_0_0 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.233 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:49.234 Found net devices under 0000:09:00.1: cvl_0_1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.234 10:59:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:31:49.234 00:31:49.234 --- 10.0.0.2 ping statistics --- 00:31:49.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.234 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:31:49.234 00:31:49.234 --- 10.0.0.1 ping statistics --- 00:31:49.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.234 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1498609 
00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1498609 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1498609 ']' 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.234 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.234 [2024-11-19 10:59:36.579001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:49.234 [2024-11-19 10:59:36.580047] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:31:49.234 [2024-11-19 10:59:36.580110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.234 [2024-11-19 10:59:36.651291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:49.234 [2024-11-19 10:59:36.710715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.234 [2024-11-19 10:59:36.710763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.234 [2024-11-19 10:59:36.710791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.234 [2024-11-19 10:59:36.710801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.235 [2024-11-19 10:59:36.710811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.235 [2024-11-19 10:59:36.712430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.235 [2024-11-19 10:59:36.712485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.235 [2024-11-19 10:59:36.712536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.235 [2024-11-19 10:59:36.712539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.235 [2024-11-19 10:59:36.801347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:49.235 [2024-11-19 10:59:36.801598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:49.235 [2024-11-19 10:59:36.801881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:49.235 [2024-11-19 10:59:36.802590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.235 [2024-11-19 10:59:36.802862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.235 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 [2024-11-19 10:59:36.857203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 Malloc0 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 [2024-11-19 10:59:36.929417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:49.493 test case1: single bdev can't be used in multiple subsystems 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.493 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.493 [2024-11-19 10:59:36.953140] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:31:49.493 [2024-11-19 10:59:36.953170] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:49.493 [2024-11-19 10:59:36.953201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.493 request: 00:31:49.493 { 00:31:49.494 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:49.494 "namespace": { 00:31:49.494 "bdev_name": "Malloc0", 00:31:49.494 "no_auto_visible": false 00:31:49.494 }, 00:31:49.494 "method": "nvmf_subsystem_add_ns", 00:31:49.494 "req_id": 1 00:31:49.494 } 00:31:49.494 Got JSON-RPC error response 00:31:49.494 response: 00:31:49.494 { 00:31:49.494 "code": -32602, 00:31:49.494 "message": "Invalid parameters" 00:31:49.494 } 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:49.494 Adding namespace failed - expected result. 
00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:49.494 test case2: host connect to nvmf target in multiple paths 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:49.494 [2024-11-19 10:59:36.961230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.494 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:49.751 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:50.009 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:50.009 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:50.009 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:50.009 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:50.009 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:52.006 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:52.006 [global] 00:31:52.006 thread=1 00:31:52.006 invalidate=1 00:31:52.006 rw=write 00:31:52.006 time_based=1 00:31:52.006 runtime=1 00:31:52.006 ioengine=libaio 00:31:52.006 direct=1 00:31:52.006 bs=4096 00:31:52.006 iodepth=1 00:31:52.006 norandommap=0 00:31:52.006 numjobs=1 00:31:52.006 00:31:52.006 verify_dump=1 00:31:52.006 verify_backlog=512 00:31:52.006 verify_state_save=0 00:31:52.006 do_verify=1 00:31:52.006 verify=crc32c-intel 00:31:52.006 [job0] 00:31:52.006 filename=/dev/nvme0n1 00:31:52.006 Could not set queue depth (nvme0n1) 00:31:52.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:52.006 fio-3.35 00:31:52.006 Starting 1 thread 00:31:53.380 00:31:53.380 job0: (groupid=0, jobs=1): err= 0: pid=1499006: Tue Nov 19 
10:59:40 2024 00:31:53.380 read: IOPS=518, BW=2072KiB/s (2122kB/s)(2116KiB/1021msec) 00:31:53.380 slat (nsec): min=6692, max=26750, avg=8502.05, stdev=2265.21 00:31:53.380 clat (usec): min=207, max=42371, avg=1546.94, stdev=7249.42 00:31:53.380 lat (usec): min=220, max=42386, avg=1555.44, stdev=7250.81 00:31:53.380 clat percentiles (usec): 00:31:53.380 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 219], 20.00th=[ 221], 00:31:53.380 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:31:53.380 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 249], 00:31:53.380 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:53.380 | 99.99th=[42206] 00:31:53.380 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:31:53.380 slat (nsec): min=8053, max=41351, avg=15142.18, stdev=5736.54 00:31:53.380 clat (usec): min=146, max=904, avg=172.77, stdev=31.54 00:31:53.380 lat (usec): min=156, max=942, avg=187.91, stdev=34.52 00:31:53.380 clat percentiles (usec): 00:31:53.380 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:31:53.380 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 176], 00:31:53.380 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 202], 00:31:53.380 | 99.00th=[ 243], 99.50th=[ 285], 99.90th=[ 469], 99.95th=[ 906], 00:31:53.380 | 99.99th=[ 906] 00:31:53.380 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:31:53.380 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:53.380 lat (usec) : 250=98.07%, 500=0.77%, 1000=0.06% 00:31:53.380 lat (msec) : 50=1.09% 00:31:53.381 cpu : usr=1.27%, sys=2.65%, ctx=1553, majf=0, minf=1 00:31:53.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.381 issued rwts: 
total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:53.381 00:31:53.381 Run status group 0 (all jobs): 00:31:53.381 READ: bw=2072KiB/s (2122kB/s), 2072KiB/s-2072KiB/s (2122kB/s-2122kB/s), io=2116KiB (2167kB), run=1021-1021msec 00:31:53.381 WRITE: bw=4012KiB/s (4108kB/s), 4012KiB/s-4012KiB/s (4108kB/s-4108kB/s), io=4096KiB (4194kB), run=1021-1021msec 00:31:53.381 00:31:53.381 Disk stats (read/write): 00:31:53.381 nvme0n1: ios=576/1024, merge=0/0, ticks=910/165, in_queue=1075, util=95.49% 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:53.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:53.381 10:59:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:53.381 rmmod nvme_tcp 00:31:53.381 rmmod nvme_fabrics 00:31:53.381 rmmod nvme_keyring 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1498609 ']' 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1498609 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1498609 ']' 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1498609 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.381 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498609 
00:31:53.639 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.639 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.639 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498609' 00:31:53.639 killing process with pid 1498609 00:31:53.639 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1498609 00:31:53.639 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1498609 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.899 10:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.899 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.806 00:31:55.806 real 0m9.241s 00:31:55.806 user 0m17.232s 00:31:55.806 sys 0m3.417s 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.806 ************************************ 00:31:55.806 END TEST nvmf_nmic 00:31:55.806 ************************************ 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:55.806 ************************************ 00:31:55.806 START TEST nvmf_fio_target 00:31:55.806 ************************************ 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:55.806 * Looking for test storage... 
00:31:55.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.806 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.066 
10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:56.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.066 --rc genhtml_branch_coverage=1 00:31:56.066 --rc genhtml_function_coverage=1 00:31:56.066 --rc genhtml_legend=1 00:31:56.066 --rc geninfo_all_blocks=1 00:31:56.066 --rc geninfo_unexecuted_blocks=1 00:31:56.066 00:31:56.066 ' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:56.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.066 --rc genhtml_branch_coverage=1 00:31:56.066 --rc genhtml_function_coverage=1 00:31:56.066 --rc genhtml_legend=1 00:31:56.066 --rc geninfo_all_blocks=1 00:31:56.066 --rc geninfo_unexecuted_blocks=1 00:31:56.066 00:31:56.066 ' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:56.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.066 --rc genhtml_branch_coverage=1 00:31:56.066 --rc genhtml_function_coverage=1 00:31:56.066 --rc genhtml_legend=1 00:31:56.066 --rc geninfo_all_blocks=1 00:31:56.066 --rc geninfo_unexecuted_blocks=1 00:31:56.066 00:31:56.066 ' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:56.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.066 --rc genhtml_branch_coverage=1 00:31:56.066 --rc genhtml_function_coverage=1 00:31:56.066 --rc genhtml_legend=1 00:31:56.066 --rc geninfo_all_blocks=1 
00:31:56.066 --rc geninfo_unexecuted_blocks=1 00:31:56.066 00:31:56.066 ' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:56.066 
10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.066 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.067 10:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.067 
10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:56.067 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.067 10:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.972 10:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:57.972 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:57.972 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.972 
10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:57.972 Found net 
devices under 0000:09:00.0: cvl_0_0 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:57.972 Found net devices under 0000:09:00.1: cvl_0_1 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.972 10:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.972 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.973 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.973 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.973 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.973 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.973 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:31:58.232 00:31:58.232 --- 10.0.0.2 ping statistics --- 00:31:58.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.232 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:31:58.232 00:31:58.232 --- 10.0.0.1 ping statistics --- 00:31:58.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.232 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.232 10:59:45 
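The xtrace output above (nvmf_tcp_init in nvmf/common.sh) builds a loopback-free TCP test topology: one physical port is moved into a network namespace to act as the target, the sibling port stays in the root namespace as the initiator, and both sides are verified with ping. A condensed dry-run sketch of those steps follows; the interface names, addresses, and port come straight from the log, but the function only prints the commands (actually running them requires root and the real e810 netdevs):

```shell
# Dry-run sketch of the namespace topology set up by nvmf_tcp_init above.
# cvl_0_0 becomes the target port (inside the namespace), cvl_0_1 the initiator.
nvmf_tcp_topology_cmds() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}
nvmf_tcp_topology_cmds
```

Because the target port lives in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 must traverse the physical link rather than the kernel loopback path, which is why the log then pings each address from the opposite side before starting nvmf_tgt under `ip netns exec`.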
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1501205 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1501205 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1501205 ']' 00:31:58.232 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.233 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.233 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.233 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.233 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.233 [2024-11-19 10:59:45.805343] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:58.233 [2024-11-19 10:59:45.806348] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:31:58.233 [2024-11-19 10:59:45.806408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.491 [2024-11-19 10:59:45.875209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:58.491 [2024-11-19 10:59:45.931469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.491 [2024-11-19 10:59:45.931521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.491 [2024-11-19 10:59:45.931548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.491 [2024-11-19 10:59:45.931560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.491 [2024-11-19 10:59:45.931569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.491 [2024-11-19 10:59:45.933172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.491 [2024-11-19 10:59:45.933238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.491 [2024-11-19 10:59:45.933287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:58.491 [2024-11-19 10:59:45.933291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.491 [2024-11-19 10:59:46.018300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:58.491 [2024-11-19 10:59:46.018543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:58.491 [2024-11-19 10:59:46.018789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:58.491 [2024-11-19 10:59:46.019399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:58.491 [2024-11-19 10:59:46.019652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:58.491 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.491 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:58.491 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:58.491 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:58.491 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.749 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.749 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:58.749 [2024-11-19 10:59:46.361982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.007 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.265 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:59.265 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:31:59.523 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:59.523 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.781 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:59.781 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.040 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:00.040 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:00.605 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.605 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:00.605 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.172 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:01.172 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.430 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:01.430 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:01.688 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:01.946 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:01.946 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:02.205 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:02.205 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:02.463 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.721 [2024-11-19 10:59:50.262209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.721 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:02.979 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:03.237 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:03.496 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
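The target/fio.sh steps above configure the running nvmf_tgt over RPC: a TCP transport, seven 64 MiB/512 B malloc bdevs, a raid0 and a concat volume built from five of them, and a subsystem (cnode1) exposing four namespaces on 10.0.0.2:4420. A condensed dry-run sketch of that sequence, with the flags taken from the log (the function only prints the commands; `$rpc` would normally be scripts/rpc.py against the live target):

```shell
# Dry-run sketch of the RPC sequence driven by target/fio.sh above.
rpc="scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1
nvmf_fio_target_cmds() {
    cat <<EOF
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
# (five more bdev_malloc_create 64 512 calls produce Malloc2..Malloc6)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_ns $nqn Malloc1
$rpc nvmf_subsystem_add_ns $nqn raid0
$rpc nvmf_subsystem_add_ns $nqn concat0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
EOF
}
nvmf_fio_target_cmds
```

The four namespaces are what `nvme connect` then surfaces as /dev/nvme0n1 through /dev/nvme0n4, matching the serial check (`grep -c SPDKISFASTANDAWESOME` returning 4) and the four fio job files below.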
common/autotest_common.sh@1212 -- # return 0 00:32:06.024 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:06.024 [global] 00:32:06.024 thread=1 00:32:06.024 invalidate=1 00:32:06.024 rw=write 00:32:06.024 time_based=1 00:32:06.024 runtime=1 00:32:06.024 ioengine=libaio 00:32:06.024 direct=1 00:32:06.024 bs=4096 00:32:06.024 iodepth=1 00:32:06.024 norandommap=0 00:32:06.024 numjobs=1 00:32:06.024 00:32:06.024 verify_dump=1 00:32:06.024 verify_backlog=512 00:32:06.024 verify_state_save=0 00:32:06.024 do_verify=1 00:32:06.024 verify=crc32c-intel 00:32:06.024 [job0] 00:32:06.024 filename=/dev/nvme0n1 00:32:06.024 [job1] 00:32:06.024 filename=/dev/nvme0n2 00:32:06.024 [job2] 00:32:06.024 filename=/dev/nvme0n3 00:32:06.024 [job3] 00:32:06.024 filename=/dev/nvme0n4 00:32:06.024 Could not set queue depth (nvme0n1) 00:32:06.024 Could not set queue depth (nvme0n2) 00:32:06.024 Could not set queue depth (nvme0n3) 00:32:06.024 Could not set queue depth (nvme0n4) 00:32:06.024 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.024 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.024 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.024 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.024 fio-3.35 00:32:06.024 Starting 4 threads 00:32:06.958 00:32:06.958 job0: (groupid=0, jobs=1): err= 0: pid=1502151: Tue Nov 19 10:59:54 2024 00:32:06.958 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:06.958 slat (nsec): min=5968, max=21214, avg=7033.39, stdev=1112.06 00:32:06.958 clat (usec): min=181, max=701, avg=260.31, stdev=55.75 00:32:06.958 lat (usec): min=188, max=721, 
avg=267.35, stdev=56.30 00:32:06.958 clat percentiles (usec): 00:32:06.958 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:32:06.958 | 30.00th=[ 221], 40.00th=[ 235], 50.00th=[ 258], 60.00th=[ 273], 00:32:06.958 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 326], 00:32:06.958 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 586], 00:32:06.958 | 99.99th=[ 701] 00:32:06.958 write: IOPS=2165, BW=8663KiB/s (8871kB/s)(8672KiB/1001msec); 0 zone resets 00:32:06.958 slat (nsec): min=8433, max=46384, avg=9708.86, stdev=1681.25 00:32:06.958 clat (usec): min=132, max=458, avg=194.29, stdev=51.32 00:32:06.958 lat (usec): min=141, max=467, avg=204.00, stdev=51.52 00:32:06.958 clat percentiles (usec): 00:32:06.958 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:32:06.958 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 190], 00:32:06.958 | 70.00th=[ 217], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:32:06.958 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 420], 00:32:06.958 | 99.99th=[ 461] 00:32:06.958 bw ( KiB/s): min= 9984, max= 9984, per=47.87%, avg=9984.00, stdev= 0.00, samples=1 00:32:06.958 iops : min= 2496, max= 2496, avg=2496.00, stdev= 0.00, samples=1 00:32:06.958 lat (usec) : 250=63.12%, 500=36.10%, 750=0.78% 00:32:06.958 cpu : usr=3.10%, sys=4.40%, ctx=4217, majf=0, minf=1 00:32:06.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.958 issued rwts: total=2048,2168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.958 job1: (groupid=0, jobs=1): err= 0: pid=1502152: Tue Nov 19 10:59:54 2024 00:32:06.958 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:32:06.958 slat (nsec): min=7502, max=34818, 
avg=25733.71, stdev=9803.38 00:32:06.958 clat (usec): min=40492, max=41973, avg=40988.87, stdev=249.20 00:32:06.958 lat (usec): min=40500, max=41989, avg=41014.60, stdev=248.56 00:32:06.958 clat percentiles (usec): 00:32:06.958 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:06.958 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.958 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:06.958 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:06.958 | 99.99th=[42206] 00:32:06.958 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:06.958 slat (nsec): min=6567, max=60131, avg=13790.44, stdev=6973.10 00:32:06.958 clat (usec): min=161, max=462, avg=262.59, stdev=43.77 00:32:06.958 lat (usec): min=170, max=473, avg=276.38, stdev=42.60 00:32:06.958 clat percentiles (usec): 00:32:06.959 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 217], 20.00th=[ 235], 00:32:06.959 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:32:06.959 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 322], 95.00th=[ 359], 00:32:06.959 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 461], 99.95th=[ 461], 00:32:06.959 | 99.99th=[ 461] 00:32:06.959 bw ( KiB/s): min= 4096, max= 4096, per=19.64%, avg=4096.00, stdev= 0.00, samples=1 00:32:06.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:06.959 lat (usec) : 250=40.34%, 500=55.72% 00:32:06.959 lat (msec) : 50=3.94% 00:32:06.959 cpu : usr=0.40%, sys=0.60%, ctx=534, majf=0, minf=1 00:32:06.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.959 job2: 
(groupid=0, jobs=1): err= 0: pid=1502153: Tue Nov 19 10:59:54 2024 00:32:06.959 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:32:06.959 slat (nsec): min=8421, max=34808, avg=25606.00, stdev=9630.51 00:32:06.959 clat (usec): min=520, max=41491, avg=39140.36, stdev=8626.78 00:32:06.959 lat (usec): min=539, max=41507, avg=39165.97, stdev=8628.16 00:32:06.959 clat percentiles (usec): 00:32:06.959 | 1.00th=[ 523], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:06.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.959 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:06.959 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:06.959 | 99.99th=[41681] 00:32:06.959 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:32:06.959 slat (nsec): min=6381, max=54955, avg=13592.31, stdev=6448.38 00:32:06.959 clat (usec): min=180, max=400, avg=260.42, stdev=35.32 00:32:06.959 lat (usec): min=186, max=409, avg=274.01, stdev=34.18 00:32:06.959 clat percentiles (usec): 00:32:06.959 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 225], 20.00th=[ 237], 00:32:06.959 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 265], 00:32:06.959 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 326], 00:32:06.959 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 400], 99.95th=[ 400], 00:32:06.959 | 99.99th=[ 400] 00:32:06.959 bw ( KiB/s): min= 4096, max= 4096, per=19.64%, avg=4096.00, stdev= 0.00, samples=1 00:32:06.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:06.959 lat (usec) : 250=38.01%, 500=57.87%, 750=0.19% 00:32:06.959 lat (msec) : 50=3.93% 00:32:06.959 cpu : usr=0.30%, sys=0.70%, ctx=534, majf=0, minf=1 00:32:06.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.959 job3: (groupid=0, jobs=1): err= 0: pid=1502154: Tue Nov 19 10:59:54 2024 00:32:06.959 read: IOPS=1762, BW=7049KiB/s (7218kB/s)(7056KiB/1001msec) 00:32:06.959 slat (nsec): min=4497, max=68438, avg=16062.43, stdev=9927.02 00:32:06.959 clat (usec): min=180, max=40320, avg=305.48, stdev=955.07 00:32:06.959 lat (usec): min=186, max=40325, avg=321.54, stdev=955.15 00:32:06.959 clat percentiles (usec): 00:32:06.959 | 1.00th=[ 206], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:32:06.959 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 277], 00:32:06.959 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 408], 00:32:06.959 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[40109], 00:32:06.959 | 99.99th=[40109] 00:32:06.959 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:06.959 slat (nsec): min=5482, max=44288, avg=12287.63, stdev=4874.82 00:32:06.959 clat (usec): min=140, max=424, avg=191.60, stdev=32.99 00:32:06.959 lat (usec): min=147, max=440, avg=203.89, stdev=32.73 00:32:06.959 clat percentiles (usec): 00:32:06.959 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 165], 20.00th=[ 172], 00:32:06.959 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:32:06.959 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 239], 95.00th=[ 258], 00:32:06.959 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 383], 99.95th=[ 392], 00:32:06.959 | 99.99th=[ 424] 00:32:06.959 bw ( KiB/s): min= 8192, max= 8192, per=39.28%, avg=8192.00, stdev= 0.00, samples=1 00:32:06.959 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:06.959 lat (usec) : 250=66.84%, 500=32.63%, 750=0.50% 00:32:06.959 lat (msec) : 50=0.03% 00:32:06.959 cpu : usr=3.00%, sys=5.70%, ctx=3812, majf=0, minf=1 00:32:06.959 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.959 issued rwts: total=1764,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.959 00:32:06.959 Run status group 0 (all jobs): 00:32:06.959 READ: bw=15.0MiB/s (15.7MB/s), 83.6KiB/s-8184KiB/s (85.6kB/s-8380kB/s), io=15.1MiB (15.8MB), run=1001-1005msec 00:32:06.959 WRITE: bw=20.4MiB/s (21.4MB/s), 2038KiB/s-8663KiB/s (2087kB/s-8871kB/s), io=20.5MiB (21.5MB), run=1001-1005msec 00:32:06.959 00:32:06.959 Disk stats (read/write): 00:32:06.959 nvme0n1: ios=1688/2048, merge=0/0, ticks=707/377, in_queue=1084, util=97.60% 00:32:06.959 nvme0n2: ios=68/512, merge=0/0, ticks=901/130, in_queue=1031, util=97.76% 00:32:06.959 nvme0n3: ios=17/512, merge=0/0, ticks=697/132, in_queue=829, util=88.89% 00:32:06.959 nvme0n4: ios=1536/1613, merge=0/0, ticks=462/301, in_queue=763, util=89.64% 00:32:06.959 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:06.959 [global] 00:32:06.959 thread=1 00:32:06.959 invalidate=1 00:32:06.959 rw=randwrite 00:32:06.959 time_based=1 00:32:06.959 runtime=1 00:32:06.959 ioengine=libaio 00:32:06.959 direct=1 00:32:06.959 bs=4096 00:32:06.959 iodepth=1 00:32:06.959 norandommap=0 00:32:06.959 numjobs=1 00:32:06.959 00:32:06.959 verify_dump=1 00:32:06.959 verify_backlog=512 00:32:06.959 verify_state_save=0 00:32:06.959 do_verify=1 00:32:06.959 verify=crc32c-intel 00:32:06.959 [job0] 00:32:06.959 filename=/dev/nvme0n1 00:32:06.959 [job1] 00:32:06.959 filename=/dev/nvme0n2 00:32:06.959 [job2] 00:32:06.959 filename=/dev/nvme0n3 00:32:06.959 [job3] 00:32:06.959 filename=/dev/nvme0n4 00:32:06.959 Could not set queue depth 
(nvme0n1) 00:32:06.959 Could not set queue depth (nvme0n2) 00:32:06.959 Could not set queue depth (nvme0n3) 00:32:06.959 Could not set queue depth (nvme0n4) 00:32:07.217 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.217 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.217 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.217 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.217 fio-3.35 00:32:07.217 Starting 4 threads 00:32:08.590 00:32:08.590 job0: (groupid=0, jobs=1): err= 0: pid=1502474: Tue Nov 19 10:59:55 2024 00:32:08.590 read: IOPS=1457, BW=5830KiB/s (5970kB/s)(5836KiB/1001msec) 00:32:08.590 slat (nsec): min=4381, max=68602, avg=17594.04, stdev=10179.95 00:32:08.590 clat (usec): min=197, max=42150, avg=402.84, stdev=1097.46 00:32:08.590 lat (usec): min=209, max=42156, avg=420.43, stdev=1097.30 00:32:08.590 clat percentiles (usec): 00:32:08.590 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 285], 00:32:08.590 | 30.00th=[ 310], 40.00th=[ 343], 50.00th=[ 379], 60.00th=[ 400], 00:32:08.591 | 70.00th=[ 433], 80.00th=[ 469], 90.00th=[ 498], 95.00th=[ 506], 00:32:08.591 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[42206], 00:32:08.591 | 99.99th=[42206] 00:32:08.591 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:32:08.591 slat (nsec): min=5838, max=53851, avg=13997.59, stdev=5014.05 00:32:08.591 clat (usec): min=143, max=542, avg=226.70, stdev=60.11 00:32:08.591 lat (usec): min=159, max=553, avg=240.70, stdev=58.44 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:32:08.591 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 223], 60.00th=[ 237], 00:32:08.591 | 70.00th=[ 245], 
80.00th=[ 258], 90.00th=[ 310], 95.00th=[ 347], 00:32:08.591 | 99.00th=[ 412], 99.50th=[ 461], 99.90th=[ 529], 99.95th=[ 545], 00:32:08.591 | 99.99th=[ 545] 00:32:08.591 bw ( KiB/s): min= 7584, max= 7584, per=34.67%, avg=7584.00, stdev= 0.00, samples=1 00:32:08.591 iops : min= 1896, max= 1896, avg=1896.00, stdev= 0.00, samples=1 00:32:08.591 lat (usec) : 250=43.57%, 500=52.22%, 750=4.17% 00:32:08.591 lat (msec) : 50=0.03% 00:32:08.591 cpu : usr=3.10%, sys=4.30%, ctx=2997, majf=0, minf=1 00:32:08.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 issued rwts: total=1459,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.591 job1: (groupid=0, jobs=1): err= 0: pid=1502496: Tue Nov 19 10:59:55 2024 00:32:08.591 read: IOPS=1825, BW=7301KiB/s (7476kB/s)(7308KiB/1001msec) 00:32:08.591 slat (nsec): min=5381, max=45845, avg=11524.02, stdev=5476.30 00:32:08.591 clat (usec): min=224, max=697, avg=288.09, stdev=34.67 00:32:08.591 lat (usec): min=229, max=712, avg=299.61, stdev=37.43 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:32:08.591 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:32:08.591 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:32:08.591 | 99.00th=[ 433], 99.50th=[ 510], 99.90th=[ 603], 99.95th=[ 701], 00:32:08.591 | 99.99th=[ 701] 00:32:08.591 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:08.591 slat (nsec): min=6844, max=47690, avg=13614.40, stdev=6582.77 00:32:08.591 clat (usec): min=156, max=288, avg=198.21, stdev=19.15 00:32:08.591 lat (usec): min=163, max=296, avg=211.83, stdev=23.86 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 
1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:32:08.591 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 204], 00:32:08.591 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:32:08.591 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 269], 00:32:08.591 | 99.99th=[ 289] 00:32:08.591 bw ( KiB/s): min= 8192, max= 8192, per=37.45%, avg=8192.00, stdev= 0.00, samples=1 00:32:08.591 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:08.591 lat (usec) : 250=57.24%, 500=42.50%, 750=0.26% 00:32:08.591 cpu : usr=3.90%, sys=6.50%, ctx=3876, majf=0, minf=1 00:32:08.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 issued rwts: total=1827,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.591 job2: (groupid=0, jobs=1): err= 0: pid=1502507: Tue Nov 19 10:59:55 2024 00:32:08.591 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:32:08.591 slat (nsec): min=6483, max=73557, avg=16755.63, stdev=5495.09 00:32:08.591 clat (usec): min=220, max=41223, avg=589.43, stdev=3106.11 00:32:08.591 lat (usec): min=227, max=41231, avg=606.18, stdev=3105.84 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 318], 00:32:08.591 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 359], 00:32:08.591 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 437], 00:32:08.591 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:08.591 | 99.99th=[41157] 00:32:08.591 write: IOPS=1404, BW=5618KiB/s (5753kB/s)(5624KiB/1001msec); 0 zone resets 00:32:08.591 slat (nsec): min=7865, max=44900, avg=15455.30, stdev=6647.45 00:32:08.591 clat (usec): min=166, 
max=507, avg=243.35, stdev=54.74 00:32:08.591 lat (usec): min=178, max=519, avg=258.81, stdev=54.13 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 204], 00:32:08.591 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 243], 00:32:08.591 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 330], 95.00th=[ 351], 00:32:08.591 | 99.00th=[ 420], 99.50th=[ 461], 99.90th=[ 465], 99.95th=[ 506], 00:32:08.591 | 99.99th=[ 506] 00:32:08.591 bw ( KiB/s): min= 4096, max= 4096, per=18.72%, avg=4096.00, stdev= 0.00, samples=1 00:32:08.591 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:08.591 lat (usec) : 250=39.01%, 500=60.12%, 750=0.53%, 1000=0.04% 00:32:08.591 lat (msec) : 2=0.04%, 50=0.25% 00:32:08.591 cpu : usr=2.60%, sys=5.40%, ctx=2432, majf=0, minf=1 00:32:08.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 issued rwts: total=1024,1406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.591 job3: (groupid=0, jobs=1): err= 0: pid=1502508: Tue Nov 19 10:59:55 2024 00:32:08.591 read: IOPS=74, BW=298KiB/s (305kB/s)(300KiB/1006msec) 00:32:08.591 slat (nsec): min=6861, max=54754, avg=19663.08, stdev=9317.20 00:32:08.591 clat (usec): min=296, max=41052, avg=11808.46, stdev=18298.14 00:32:08.591 lat (usec): min=315, max=41066, avg=11828.12, stdev=18299.70 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 297], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 404], 00:32:08.591 | 30.00th=[ 449], 40.00th=[ 482], 50.00th=[ 545], 60.00th=[ 586], 00:32:08.591 | 70.00th=[ 652], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:08.591 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:08.591 | 
99.99th=[41157] 00:32:08.591 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:32:08.591 slat (nsec): min=5887, max=28696, avg=7214.83, stdev=2088.00 00:32:08.591 clat (usec): min=158, max=384, avg=220.56, stdev=28.83 00:32:08.591 lat (usec): min=165, max=390, avg=227.78, stdev=28.84 00:32:08.591 clat percentiles (usec): 00:32:08.591 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:32:08.591 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 233], 60.00th=[ 245], 00:32:08.591 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 247], 95.00th=[ 251], 00:32:08.591 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 383], 99.95th=[ 383], 00:32:08.591 | 99.99th=[ 383] 00:32:08.591 bw ( KiB/s): min= 4096, max= 4096, per=18.72%, avg=4096.00, stdev= 0.00, samples=1 00:32:08.591 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:08.591 lat (usec) : 250=82.96%, 500=9.71%, 750=3.75% 00:32:08.591 lat (msec) : 50=3.58% 00:32:08.591 cpu : usr=0.20%, sys=0.50%, ctx=587, majf=0, minf=2 00:32:08.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.591 issued rwts: total=75,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.591 00:32:08.591 Run status group 0 (all jobs): 00:32:08.591 READ: bw=17.0MiB/s (17.9MB/s), 298KiB/s-7301KiB/s (305kB/s-7476kB/s), io=17.1MiB (18.0MB), run=1001-1006msec 00:32:08.591 WRITE: bw=21.4MiB/s (22.4MB/s), 2036KiB/s-8184KiB/s (2085kB/s-8380kB/s), io=21.5MiB (22.5MB), run=1001-1006msec 00:32:08.591 00:32:08.591 Disk stats (read/write): 00:32:08.591 nvme0n1: ios=1083/1536, merge=0/0, ticks=695/332, in_queue=1027, util=88.58% 00:32:08.591 nvme0n2: ios=1560/1740, merge=0/0, ticks=1392/324, in_queue=1716, util=97.24% 00:32:08.591 nvme0n3: 
ios=877/1024, merge=0/0, ticks=639/254, in_queue=893, util=92.85% 00:32:08.591 nvme0n4: ios=128/512, merge=0/0, ticks=797/107, in_queue=904, util=93.85% 00:32:08.591 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:08.591 [global] 00:32:08.591 thread=1 00:32:08.591 invalidate=1 00:32:08.591 rw=write 00:32:08.591 time_based=1 00:32:08.591 runtime=1 00:32:08.591 ioengine=libaio 00:32:08.591 direct=1 00:32:08.592 bs=4096 00:32:08.592 iodepth=128 00:32:08.592 norandommap=0 00:32:08.592 numjobs=1 00:32:08.592 00:32:08.592 verify_dump=1 00:32:08.592 verify_backlog=512 00:32:08.592 verify_state_save=0 00:32:08.592 do_verify=1 00:32:08.592 verify=crc32c-intel 00:32:08.592 [job0] 00:32:08.592 filename=/dev/nvme0n1 00:32:08.592 [job1] 00:32:08.592 filename=/dev/nvme0n2 00:32:08.592 [job2] 00:32:08.592 filename=/dev/nvme0n3 00:32:08.592 [job3] 00:32:08.592 filename=/dev/nvme0n4 00:32:08.592 Could not set queue depth (nvme0n1) 00:32:08.592 Could not set queue depth (nvme0n2) 00:32:08.592 Could not set queue depth (nvme0n3) 00:32:08.592 Could not set queue depth (nvme0n4) 00:32:08.592 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.592 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.592 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.592 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.592 fio-3.35 00:32:08.592 Starting 4 threads 00:32:09.967 00:32:09.967 job0: (groupid=0, jobs=1): err= 0: pid=1502731: Tue Nov 19 10:59:57 2024 00:32:09.967 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:32:09.967 slat (usec): min=2, max=10927, avg=109.78, 
stdev=855.58 00:32:09.967 clat (usec): min=3662, max=35118, avg=14203.39, stdev=4684.18 00:32:09.967 lat (usec): min=3668, max=35133, avg=14313.17, stdev=4756.20 00:32:09.967 clat percentiles (usec): 00:32:09.967 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11469], 00:32:09.967 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:32:09.967 | 70.00th=[15533], 80.00th=[16057], 90.00th=[21365], 95.00th=[24249], 00:32:09.967 | 99.00th=[31327], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:32:09.967 | 99.99th=[34866] 00:32:09.967 write: IOPS=4952, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec); 0 zone resets 00:32:09.967 slat (usec): min=3, max=10950, avg=85.45, stdev=604.54 00:32:09.967 clat (usec): min=227, max=36566, avg=12495.10, stdev=5320.85 00:32:09.967 lat (usec): min=358, max=36679, avg=12580.55, stdev=5371.65 00:32:09.967 clat percentiles (usec): 00:32:09.967 | 1.00th=[ 1926], 5.00th=[ 4621], 10.00th=[ 7177], 20.00th=[ 8979], 00:32:09.967 | 30.00th=[10028], 40.00th=[11600], 50.00th=[12518], 60.00th=[12780], 00:32:09.967 | 70.00th=[13042], 80.00th=[15664], 90.00th=[16712], 95.00th=[22414], 00:32:09.967 | 99.00th=[33817], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:32:09.967 | 99.99th=[36439] 00:32:09.967 bw ( KiB/s): min=16944, max=21928, per=26.15%, avg=19436.00, stdev=3524.22, samples=2 00:32:09.967 iops : min= 4236, max= 5482, avg=4859.00, stdev=881.06, samples=2 00:32:09.967 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:32:09.967 lat (msec) : 2=0.63%, 4=1.56%, 10=16.92%, 20=72.46%, 50=8.38% 00:32:09.967 cpu : usr=3.58%, sys=5.57%, ctx=394, majf=0, minf=1 00:32:09.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:09.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.967 issued rwts: total=4608,4987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.967 
latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.967 job1: (groupid=0, jobs=1): err= 0: pid=1502732: Tue Nov 19 10:59:57 2024 00:32:09.967 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:32:09.967 slat (usec): min=2, max=5808, avg=93.74, stdev=557.89 00:32:09.967 clat (usec): min=6951, max=20842, avg=11766.62, stdev=2034.97 00:32:09.967 lat (usec): min=6960, max=20846, avg=11860.36, stdev=2046.52 00:32:09.967 clat percentiles (usec): 00:32:09.967 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:32:09.967 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11863], 60.00th=[12518], 00:32:09.967 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14353], 95.00th=[15139], 00:32:09.967 | 99.00th=[16712], 99.50th=[17957], 99.90th=[20841], 99.95th=[20841], 00:32:09.967 | 99.99th=[20841] 00:32:09.967 write: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec); 0 zone resets 00:32:09.967 slat (usec): min=3, max=22340, avg=107.24, stdev=726.96 00:32:09.967 clat (usec): min=646, max=60013, avg=13746.06, stdev=7064.69 00:32:09.967 lat (usec): min=5650, max=60019, avg=13853.31, stdev=7113.38 00:32:09.967 clat percentiles (usec): 00:32:09.967 | 1.00th=[ 6128], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:32:09.967 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:32:09.967 | 70.00th=[12518], 80.00th=[14091], 90.00th=[17957], 95.00th=[20317], 00:32:09.967 | 99.00th=[51119], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:32:09.967 | 99.99th=[60031] 00:32:09.967 bw ( KiB/s): min=16384, max=22912, per=26.43%, avg=19648.00, stdev=4615.99, samples=2 00:32:09.967 iops : min= 4096, max= 5728, avg=4912.00, stdev=1154.00, samples=2 00:32:09.967 lat (usec) : 750=0.01% 00:32:09.967 lat (msec) : 10=13.58%, 20=83.09%, 50=2.66%, 100=0.65% 00:32:09.967 cpu : usr=4.49%, sys=5.38%, ctx=538, majf=0, minf=1 00:32:09.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:09.968 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.968 issued rwts: total=4608,5040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.968 job2: (groupid=0, jobs=1): err= 0: pid=1502733: Tue Nov 19 10:59:57 2024 00:32:09.968 read: IOPS=4318, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1008msec) 00:32:09.968 slat (usec): min=2, max=13610, avg=116.24, stdev=940.50 00:32:09.968 clat (usec): min=1241, max=28678, avg=14860.60, stdev=3420.11 00:32:09.968 lat (usec): min=9925, max=28684, avg=14976.83, stdev=3516.73 00:32:09.968 clat percentiles (usec): 00:32:09.968 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[12125], 00:32:09.968 | 30.00th=[12518], 40.00th=[13698], 50.00th=[14353], 60.00th=[14877], 00:32:09.968 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20055], 95.00th=[21103], 00:32:09.968 | 99.00th=[25822], 99.50th=[26870], 99.90th=[28181], 99.95th=[28705], 00:32:09.968 | 99.99th=[28705] 00:32:09.968 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:32:09.968 slat (usec): min=4, max=12669, avg=103.01, stdev=866.06 00:32:09.968 clat (usec): min=1081, max=28686, avg=13704.30, stdev=3169.44 00:32:09.968 lat (usec): min=1093, max=28719, avg=13807.31, stdev=3236.51 00:32:09.968 clat percentiles (usec): 00:32:09.968 | 1.00th=[ 7046], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[11863], 00:32:09.968 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[14353], 00:32:09.968 | 70.00th=[15270], 80.00th=[16057], 90.00th=[17171], 95.00th=[19268], 00:32:09.968 | 99.00th=[22152], 99.50th=[25035], 99.90th=[27919], 99.95th=[28705], 00:32:09.968 | 99.99th=[28705] 00:32:09.968 bw ( KiB/s): min=17592, max=19272, per=24.80%, avg=18432.00, stdev=1187.94, samples=2 00:32:09.968 iops : min= 4398, max= 4818, avg=4608.00, stdev=296.98, samples=2 00:32:09.968 lat (msec) : 2=0.03%, 
4=0.07%, 10=6.85%, 20=86.32%, 50=6.73% 00:32:09.968 cpu : usr=3.67%, sys=4.57%, ctx=231, majf=0, minf=1 00:32:09.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.968 issued rwts: total=4353,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.968 job3: (groupid=0, jobs=1): err= 0: pid=1502734: Tue Nov 19 10:59:57 2024 00:32:09.968 read: IOPS=3791, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1007msec) 00:32:09.968 slat (usec): min=3, max=9461, avg=118.62, stdev=795.80 00:32:09.968 clat (usec): min=1110, max=38008, avg=15197.03, stdev=3897.89 00:32:09.968 lat (usec): min=9740, max=38014, avg=15315.65, stdev=3967.36 00:32:09.968 clat percentiles (usec): 00:32:09.968 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11469], 20.00th=[12518], 00:32:09.968 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14353], 00:32:09.968 | 70.00th=[16057], 80.00th=[18482], 90.00th=[20055], 95.00th=[21890], 00:32:09.968 | 99.00th=[28967], 99.50th=[32900], 99.90th=[38011], 99.95th=[38011], 00:32:09.968 | 99.99th=[38011] 00:32:09.968 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:32:09.968 slat (usec): min=4, max=6756, avg=129.31, stdev=781.50 00:32:09.968 clat (usec): min=7194, max=50105, avg=16953.57, stdev=9311.46 00:32:09.968 lat (usec): min=7204, max=50119, avg=17082.87, stdev=9393.41 00:32:09.968 clat percentiles (usec): 00:32:09.968 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:32:09.968 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:32:09.968 | 70.00th=[13960], 80.00th=[14353], 90.00th=[39584], 95.00th=[42206], 00:32:09.968 | 99.00th=[43779], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:32:09.968 | 99.99th=[50070] 
00:32:09.968 bw ( KiB/s): min=12672, max=20096, per=22.04%, avg=16384.00, stdev=5249.56, samples=2 00:32:09.968 iops : min= 3168, max= 5024, avg=4096.00, stdev=1312.39, samples=2 00:32:09.968 lat (msec) : 2=0.01%, 10=0.48%, 20=88.24%, 50=11.18%, 100=0.09% 00:32:09.968 cpu : usr=3.18%, sys=4.27%, ctx=231, majf=0, minf=1 00:32:09.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:09.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.968 issued rwts: total=3818,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.968 00:32:09.968 Run status group 0 (all jobs): 00:32:09.968 READ: bw=67.4MiB/s (70.7MB/s), 14.8MiB/s-17.9MiB/s (15.5MB/s-18.8MB/s), io=67.9MiB (71.2MB), run=1004-1008msec 00:32:09.968 WRITE: bw=72.6MiB/s (76.1MB/s), 15.9MiB/s-19.6MiB/s (16.7MB/s-20.6MB/s), io=73.2MiB (76.7MB), run=1004-1008msec 00:32:09.968 00:32:09.968 Disk stats (read/write): 00:32:09.968 nvme0n1: ios=4146/4406, merge=0/0, ticks=48490/47440, in_queue=95930, util=87.37% 00:32:09.968 nvme0n2: ios=3863/4096, merge=0/0, ticks=21267/24779, in_queue=46046, util=91.07% 00:32:09.968 nvme0n3: ios=3641/3864, merge=0/0, ticks=53382/52046, in_queue=105428, util=95.11% 00:32:09.968 nvme0n4: ios=3641/3759, merge=0/0, ticks=25268/25811, in_queue=51079, util=95.60% 00:32:09.968 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:09.968 [global] 00:32:09.968 thread=1 00:32:09.968 invalidate=1 00:32:09.968 rw=randwrite 00:32:09.968 time_based=1 00:32:09.968 runtime=1 00:32:09.968 ioengine=libaio 00:32:09.968 direct=1 00:32:09.968 bs=4096 00:32:09.968 iodepth=128 00:32:09.968 norandommap=0 00:32:09.968 numjobs=1 00:32:09.968 
00:32:09.968 verify_dump=1 00:32:09.968 verify_backlog=512 00:32:09.968 verify_state_save=0 00:32:09.968 do_verify=1 00:32:09.968 verify=crc32c-intel 00:32:09.968 [job0] 00:32:09.968 filename=/dev/nvme0n1 00:32:09.968 [job1] 00:32:09.968 filename=/dev/nvme0n2 00:32:09.968 [job2] 00:32:09.968 filename=/dev/nvme0n3 00:32:09.968 [job3] 00:32:09.968 filename=/dev/nvme0n4 00:32:09.968 Could not set queue depth (nvme0n1) 00:32:09.968 Could not set queue depth (nvme0n2) 00:32:09.968 Could not set queue depth (nvme0n3) 00:32:09.968 Could not set queue depth (nvme0n4) 00:32:10.226 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.226 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.226 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.226 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.226 fio-3.35 00:32:10.226 Starting 4 threads 00:32:11.603 00:32:11.603 job0: (groupid=0, jobs=1): err= 0: pid=1502966: Tue Nov 19 10:59:58 2024 00:32:11.603 read: IOPS=2206, BW=8828KiB/s (9040kB/s)(8872KiB/1005msec) 00:32:11.603 slat (usec): min=3, max=14636, avg=181.04, stdev=1067.30 00:32:11.603 clat (usec): min=3255, max=43865, avg=23014.98, stdev=4952.89 00:32:11.603 lat (usec): min=8541, max=48590, avg=23196.02, stdev=5030.77 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[ 8717], 5.00th=[16319], 10.00th=[16712], 20.00th=[19792], 00:32:11.603 | 30.00th=[20841], 40.00th=[21890], 50.00th=[23462], 60.00th=[24249], 00:32:11.603 | 70.00th=[24511], 80.00th=[25822], 90.00th=[27919], 95.00th=[32113], 00:32:11.603 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:32:11.603 | 99.99th=[43779] 00:32:11.603 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 
00:32:11.603 slat (usec): min=4, max=28066, avg=224.78, stdev=1423.57 00:32:11.603 clat (usec): min=12479, max=70347, avg=29498.83, stdev=14037.63 00:32:11.603 lat (usec): min=12486, max=70401, avg=29723.61, stdev=14173.45 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[14222], 5.00th=[15270], 10.00th=[16450], 20.00th=[17957], 00:32:11.603 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21627], 60.00th=[26084], 00:32:11.603 | 70.00th=[37487], 80.00th=[49021], 90.00th=[53740], 95.00th=[54264], 00:32:11.603 | 99.00th=[56886], 99.50th=[57410], 99.90th=[61080], 99.95th=[67634], 00:32:11.603 | 99.99th=[70779] 00:32:11.603 bw ( KiB/s): min= 8728, max=11775, per=16.23%, avg=10251.50, stdev=2154.55, samples=2 00:32:11.603 iops : min= 2182, max= 2943, avg=2562.50, stdev=538.11, samples=2 00:32:11.603 lat (msec) : 4=0.02%, 10=0.90%, 20=22.98%, 50=67.20%, 100=8.89% 00:32:11.603 cpu : usr=3.29%, sys=3.98%, ctx=170, majf=0, minf=1 00:32:11.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:11.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.603 issued rwts: total=2218,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.603 job1: (groupid=0, jobs=1): err= 0: pid=1502967: Tue Nov 19 10:59:58 2024 00:32:11.603 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:32:11.603 slat (usec): min=2, max=13617, avg=152.43, stdev=977.10 00:32:11.603 clat (usec): min=8032, max=37833, avg=20098.96, stdev=5403.93 00:32:11.603 lat (usec): min=8037, max=39520, avg=20251.39, stdev=5470.96 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[11863], 5.00th=[13173], 10.00th=[13566], 20.00th=[14222], 00:32:11.603 | 30.00th=[15139], 40.00th=[17957], 50.00th=[20841], 60.00th=[22938], 00:32:11.603 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 
95.00th=[28967], 00:32:11.603 | 99.00th=[33424], 99.50th=[34341], 99.90th=[34866], 99.95th=[36439], 00:32:11.603 | 99.99th=[38011] 00:32:11.603 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1005msec); 0 zone resets 00:32:11.603 slat (usec): min=3, max=30340, avg=164.32, stdev=1331.39 00:32:11.603 clat (usec): min=1693, max=75092, avg=20972.79, stdev=9323.95 00:32:11.603 lat (usec): min=7205, max=75137, avg=21137.11, stdev=9454.10 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[ 9110], 5.00th=[13042], 10.00th=[13304], 20.00th=[13435], 00:32:11.603 | 30.00th=[13829], 40.00th=[16188], 50.00th=[20055], 60.00th=[20579], 00:32:11.603 | 70.00th=[21627], 80.00th=[26346], 90.00th=[36439], 95.00th=[39060], 00:32:11.603 | 99.00th=[56361], 99.50th=[56361], 99.90th=[57934], 99.95th=[63177], 00:32:11.603 | 99.99th=[74974] 00:32:11.603 bw ( KiB/s): min=11032, max=13544, per=19.45%, avg=12288.00, stdev=1776.25, samples=2 00:32:11.603 iops : min= 2758, max= 3386, avg=3072.00, stdev=444.06, samples=2 00:32:11.603 lat (msec) : 2=0.02%, 10=0.80%, 20=47.69%, 50=50.42%, 100=1.07% 00:32:11.603 cpu : usr=1.99%, sys=5.68%, ctx=148, majf=0, minf=1 00:32:11.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:11.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.603 issued rwts: total=3072,3088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.603 job2: (groupid=0, jobs=1): err= 0: pid=1502968: Tue Nov 19 10:59:58 2024 00:32:11.603 read: IOPS=4492, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1003msec) 00:32:11.603 slat (usec): min=2, max=12109, avg=106.00, stdev=716.99 00:32:11.603 clat (usec): min=893, max=31373, avg=13488.63, stdev=3143.56 00:32:11.603 lat (usec): min=4322, max=31411, avg=13594.63, stdev=3203.79 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 
1.00th=[ 6587], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11600], 00:32:11.603 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:32:11.603 | 70.00th=[14353], 80.00th=[15795], 90.00th=[18744], 95.00th=[19530], 00:32:11.603 | 99.00th=[21890], 99.50th=[21890], 99.90th=[28181], 99.95th=[28705], 00:32:11.603 | 99.99th=[31327] 00:32:11.603 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:32:11.603 slat (usec): min=3, max=11248, avg=87.15, stdev=553.24 00:32:11.603 clat (usec): min=282, max=100012, avg=12952.84, stdev=8864.85 00:32:11.603 lat (usec): min=491, max=100025, avg=13039.99, stdev=8882.78 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[ 1074], 5.00th=[ 3785], 10.00th=[ 7046], 20.00th=[ 11338], 00:32:11.603 | 30.00th=[ 11994], 40.00th=[ 12256], 50.00th=[ 12387], 60.00th=[ 12649], 00:32:11.603 | 70.00th=[ 13173], 80.00th=[ 13566], 90.00th=[ 15008], 95.00th=[ 17433], 00:32:11.603 | 99.00th=[ 65799], 99.50th=[ 86508], 99.90th=[ 99091], 99.95th=[ 99091], 00:32:11.603 | 99.99th=[100140] 00:32:11.603 bw ( KiB/s): min=20480, max=20480, per=32.42%, avg=20480.00, stdev= 0.00, samples=2 00:32:11.603 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:32:11.603 lat (usec) : 500=0.04%, 750=0.10%, 1000=0.28% 00:32:11.603 lat (msec) : 2=0.48%, 4=1.78%, 10=8.41%, 20=86.31%, 50=1.87% 00:32:11.603 lat (msec) : 100=0.72%, 250=0.01% 00:32:11.603 cpu : usr=4.79%, sys=8.68%, ctx=446, majf=0, minf=1 00:32:11.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:11.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.603 issued rwts: total=4506,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.603 job3: (groupid=0, jobs=1): err= 0: pid=1502969: Tue Nov 19 10:59:58 2024 00:32:11.603 read: 
IOPS=5059, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1006msec) 00:32:11.603 slat (usec): min=2, max=11461, avg=103.25, stdev=811.03 00:32:11.603 clat (usec): min=1548, max=24770, avg=12942.88, stdev=3258.82 00:32:11.603 lat (usec): min=6425, max=24786, avg=13046.13, stdev=3325.45 00:32:11.603 clat percentiles (usec): 00:32:11.603 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10814], 00:32:11.603 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:32:11.603 | 70.00th=[12780], 80.00th=[13960], 90.00th=[18744], 95.00th=[20579], 00:32:11.603 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:32:11.604 | 99.99th=[24773] 00:32:11.604 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:32:11.604 slat (usec): min=3, max=10483, avg=84.52, stdev=567.97 00:32:11.604 clat (usec): min=1346, max=24734, avg=12054.67, stdev=2629.40 00:32:11.604 lat (usec): min=1355, max=24744, avg=12139.19, stdev=2657.53 00:32:11.604 clat percentiles (usec): 00:32:11.604 | 1.00th=[ 5276], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 9765], 00:32:11.604 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12649], 60.00th=[13042], 00:32:11.604 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[16450], 00:32:11.604 | 99.00th=[17695], 99.50th=[20317], 99.90th=[23462], 99.95th=[24249], 00:32:11.604 | 99.99th=[24773] 00:32:11.604 bw ( KiB/s): min=19856, max=21104, per=32.42%, avg=20480.00, stdev=882.47, samples=2 00:32:11.604 iops : min= 4964, max= 5276, avg=5120.00, stdev=220.62, samples=2 00:32:11.604 lat (msec) : 2=0.06%, 4=0.07%, 10=16.27%, 20=80.21%, 50=3.40% 00:32:11.604 cpu : usr=6.07%, sys=8.46%, ctx=437, majf=0, minf=1 00:32:11.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:11.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.604 issued rwts: total=5090,5120,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:11.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.604 00:32:11.604 Run status group 0 (all jobs): 00:32:11.604 READ: bw=57.8MiB/s (60.6MB/s), 8828KiB/s-19.8MiB/s (9040kB/s-20.7MB/s), io=58.1MiB (61.0MB), run=1003-1006msec 00:32:11.604 WRITE: bw=61.7MiB/s (64.7MB/s), 9.95MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=62.1MiB (65.1MB), run=1003-1006msec 00:32:11.604 00:32:11.604 Disk stats (read/write): 00:32:11.604 nvme0n1: ios=1779/2048, merge=0/0, ticks=21298/31122, in_queue=52420, util=85.87% 00:32:11.604 nvme0n2: ios=2610/2739, merge=0/0, ticks=24360/27227, in_queue=51587, util=91.47% 00:32:11.604 nvme0n3: ios=3644/4555, merge=0/0, ticks=29382/35773, in_queue=65155, util=93.56% 00:32:11.604 nvme0n4: ios=4153/4562, merge=0/0, ticks=50170/52494, in_queue=102664, util=95.71% 00:32:11.604 10:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:11.604 10:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1503104 00:32:11.604 10:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:11.604 10:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:11.604 [global] 00:32:11.604 thread=1 00:32:11.604 invalidate=1 00:32:11.604 rw=read 00:32:11.604 time_based=1 00:32:11.604 runtime=10 00:32:11.604 ioengine=libaio 00:32:11.604 direct=1 00:32:11.604 bs=4096 00:32:11.604 iodepth=1 00:32:11.604 norandommap=1 00:32:11.604 numjobs=1 00:32:11.604 00:32:11.604 [job0] 00:32:11.604 filename=/dev/nvme0n1 00:32:11.604 [job1] 00:32:11.604 filename=/dev/nvme0n2 00:32:11.604 [job2] 00:32:11.604 filename=/dev/nvme0n3 00:32:11.604 [job3] 00:32:11.604 filename=/dev/nvme0n4 00:32:11.604 Could not set queue depth (nvme0n1) 00:32:11.604 Could not set queue depth 
(nvme0n2) 00:32:11.604 Could not set queue depth (nvme0n3) 00:32:11.604 Could not set queue depth (nvme0n4) 00:32:11.604 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.604 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.604 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.604 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.604 fio-3.35 00:32:11.604 Starting 4 threads 00:32:14.885 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:14.885 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:14.885 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27938816, buflen=4096 00:32:14.885 fio: pid=1503195, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.885 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.885 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:14.885 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43188224, buflen=4096 00:32:14.885 fio: pid=1503194, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.451 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:32:15.451 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:15.451 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6172672, buflen=4096 00:32:15.451 fio: pid=1503192, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.710 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.710 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:15.710 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=30896128, buflen=4096 00:32:15.710 fio: pid=1503193, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:32:15.710 00:32:15.710 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1503192: Tue Nov 19 11:00:03 2024 00:32:15.710 read: IOPS=425, BW=1701KiB/s (1742kB/s)(6028KiB/3543msec) 00:32:15.710 slat (usec): min=4, max=26907, avg=31.48, stdev=714.39 00:32:15.710 clat (usec): min=183, max=44973, avg=2301.37, stdev=8829.59 00:32:15.710 lat (usec): min=197, max=68001, avg=2328.35, stdev=8936.39 00:32:15.710 clat percentiles (usec): 00:32:15.710 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:32:15.710 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:32:15.710 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 355], 95.00th=[10683], 00:32:15.710 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[44827], 00:32:15.710 | 99.99th=[44827] 00:32:15.710 bw ( KiB/s): min= 120, max= 6200, per=7.25%, avg=1992.00, stdev=2810.09, samples=6 00:32:15.710 iops : min= 30, max= 1550, avg=498.00, stdev=702.52, samples=6 
00:32:15.710 lat (usec) : 250=28.85%, 500=64.59%, 750=1.26%, 1000=0.07% 00:32:15.710 lat (msec) : 2=0.07%, 4=0.07%, 20=0.07%, 50=4.97% 00:32:15.710 cpu : usr=0.23%, sys=0.62%, ctx=1510, majf=0, minf=2 00:32:15.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.710 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.710 issued rwts: total=1508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.710 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1503193: Tue Nov 19 11:00:03 2024 00:32:15.711 read: IOPS=1960, BW=7841KiB/s (8029kB/s)(29.5MiB/3848msec) 00:32:15.711 slat (usec): min=4, max=12951, avg=10.97, stdev=170.79 00:32:15.711 clat (usec): min=176, max=41108, avg=497.04, stdev=2986.56 00:32:15.711 lat (usec): min=189, max=53962, avg=507.06, stdev=3014.29 00:32:15.711 clat percentiles (usec): 00:32:15.711 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:32:15.711 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:32:15.711 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 379], 95.00th=[ 465], 00:32:15.711 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:15.711 | 99.99th=[41157] 00:32:15.711 bw ( KiB/s): min= 104, max=14560, per=31.33%, avg=8603.71, stdev=6396.94, samples=7 00:32:15.711 iops : min= 26, max= 3640, avg=2150.86, stdev=1599.34, samples=7 00:32:15.711 lat (usec) : 250=43.21%, 500=53.50%, 750=2.49%, 1000=0.20% 00:32:15.711 lat (msec) : 2=0.03%, 4=0.01%, 50=0.54% 00:32:15.711 cpu : usr=1.01%, sys=2.37%, ctx=7547, majf=0, minf=2 00:32:15.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 complete : 0=0.1%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 issued rwts: total=7544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.711 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1503194: Tue Nov 19 11:00:03 2024 00:32:15.711 read: IOPS=3270, BW=12.8MiB/s (13.4MB/s)(41.2MiB/3224msec) 00:32:15.711 slat (nsec): min=4166, max=77009, avg=9348.13, stdev=5764.23 00:32:15.711 clat (usec): min=193, max=2288, avg=291.89, stdev=81.05 00:32:15.711 lat (usec): min=200, max=2295, avg=301.24, stdev=83.41 00:32:15.711 clat percentiles (usec): 00:32:15.711 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:32:15.711 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 297], 00:32:15.711 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 412], 00:32:15.711 | 99.00th=[ 519], 99.50th=[ 578], 99.90th=[ 1057], 99.95th=[ 1303], 00:32:15.711 | 99.99th=[ 1975] 00:32:15.711 bw ( KiB/s): min=10752, max=16408, per=47.05%, avg=12918.67, stdev=1922.94, samples=6 00:32:15.711 iops : min= 2688, max= 4102, avg=3229.67, stdev=480.74, samples=6 00:32:15.711 lat (usec) : 250=39.45%, 500=58.90%, 750=1.43%, 1000=0.10% 00:32:15.711 lat (msec) : 2=0.09%, 4=0.01% 00:32:15.711 cpu : usr=1.30%, sys=4.41%, ctx=10545, majf=0, minf=1 00:32:15.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 issued rwts: total=10545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.711 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1503195: Tue Nov 19 11:00:03 2024 00:32:15.711 read: IOPS=2308, BW=9233KiB/s (9455kB/s)(26.6MiB/2955msec) 
00:32:15.711 slat (nsec): min=5389, max=50931, avg=7992.21, stdev=3957.89 00:32:15.711 clat (usec): min=181, max=41142, avg=421.55, stdev=2204.98 00:32:15.711 lat (usec): min=186, max=41150, avg=429.54, stdev=2205.91 00:32:15.711 clat percentiles (usec): 00:32:15.711 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:32:15.711 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 277], 60.00th=[ 297], 00:32:15.711 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 469], 00:32:15.711 | 99.00th=[ 611], 99.50th=[ 881], 99.90th=[41157], 99.95th=[41157], 00:32:15.711 | 99.99th=[41157] 00:32:15.711 bw ( KiB/s): min= 3144, max=13704, per=34.13%, avg=9372.80, stdev=4431.47, samples=5 00:32:15.711 iops : min= 786, max= 3426, avg=2343.20, stdev=1107.87, samples=5 00:32:15.711 lat (usec) : 250=34.10%, 500=61.90%, 750=3.21%, 1000=0.40% 00:32:15.711 lat (msec) : 2=0.06%, 4=0.01%, 20=0.01%, 50=0.29% 00:32:15.711 cpu : usr=1.05%, sys=3.05%, ctx=6822, majf=0, minf=1 00:32:15.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.711 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.711 00:32:15.711 Run status group 0 (all jobs): 00:32:15.711 READ: bw=26.8MiB/s (28.1MB/s), 1701KiB/s-12.8MiB/s (1742kB/s-13.4MB/s), io=103MiB (108MB), run=2955-3848msec 00:32:15.711 00:32:15.711 Disk stats (read/write): 00:32:15.711 nvme0n1: ios=1503/0, merge=0/0, ticks=3288/0, in_queue=3288, util=95.31% 00:32:15.711 nvme0n2: ios=7537/0, merge=0/0, ticks=3459/0, in_queue=3459, util=96.62% 00:32:15.711 nvme0n3: ios=10136/0, merge=0/0, ticks=2920/0, in_queue=2920, util=96.79% 00:32:15.711 nvme0n4: ios=6818/0, merge=0/0, ticks=2701/0, in_queue=2701, util=96.75% 00:32:15.970 11:00:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.970 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:16.229 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.229 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:16.487 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.487 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:16.745 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.745 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:17.003 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:17.003 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1503104 00:32:17.003 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:17.003 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:17.261 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:17.261 nvmf hotplug test: fio failed as expected 00:32:17.261 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.519 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.519 rmmod nvme_tcp 00:32:17.519 rmmod nvme_fabrics 00:32:17.519 rmmod nvme_keyring 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1501205 ']' 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1501205 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1501205 ']' 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1501205 00:32:17.519 11:00:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501205 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501205' 00:32:17.519 killing process with pid 1501205 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1501205 00:32:17.519 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1501205 00:32:17.777 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.777 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.777 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.778 
11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.778 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.683 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.683 00:32:19.683 real 0m23.937s 00:32:19.683 user 1m8.172s 00:32:19.683 sys 0m10.261s 00:32:19.683 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.683 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.683 ************************************ 00:32:19.683 END TEST nvmf_fio_target 00:32:19.683 ************************************ 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.942 ************************************ 00:32:19.942 START TEST nvmf_bdevio 00:32:19.942 
************************************ 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:19.942 * Looking for test storage... 00:32:19.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.942 --rc genhtml_branch_coverage=1 00:32:19.942 --rc genhtml_function_coverage=1 00:32:19.942 --rc genhtml_legend=1 00:32:19.942 --rc geninfo_all_blocks=1 00:32:19.942 --rc geninfo_unexecuted_blocks=1 00:32:19.942 00:32:19.942 ' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.942 --rc genhtml_branch_coverage=1 00:32:19.942 --rc genhtml_function_coverage=1 00:32:19.942 --rc genhtml_legend=1 00:32:19.942 --rc geninfo_all_blocks=1 00:32:19.942 --rc geninfo_unexecuted_blocks=1 00:32:19.942 00:32:19.942 ' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.942 --rc genhtml_branch_coverage=1 00:32:19.942 --rc genhtml_function_coverage=1 00:32:19.942 --rc genhtml_legend=1 00:32:19.942 --rc geninfo_all_blocks=1 00:32:19.942 --rc geninfo_unexecuted_blocks=1 00:32:19.942 00:32:19.942 ' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:19.942 --rc genhtml_branch_coverage=1 00:32:19.942 --rc genhtml_function_coverage=1 00:32:19.942 --rc genhtml_legend=1 00:32:19.942 --rc geninfo_all_blocks=1 00:32:19.942 --rc geninfo_unexecuted_blocks=1 00:32:19.942 00:32:19.942 ' 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:19.942 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:19.943 11:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.943 11:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.943 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.926 11:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.926 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:21.927 11:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:21.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:21.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:21.927 Found net devices under 0000:09:00.0: cvl_0_0 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:21.927 Found net devices under 0000:09:00.1: cvl_0_1 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.927 
11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.927 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:32:22.186 00:32:22.186 --- 10.0.0.2 ping statistics --- 00:32:22.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.186 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:32:22.186 00:32:22.186 --- 10.0.0.1 ping statistics --- 00:32:22.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.186 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1506557 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1506557 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1506557 ']' 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.186 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.186 [2024-11-19 11:00:09.694975] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:22.186 [2024-11-19 11:00:09.696084] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:32:22.186 [2024-11-19 11:00:09.696160] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.186 [2024-11-19 11:00:09.769716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:22.446 [2024-11-19 11:00:09.832213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.446 [2024-11-19 11:00:09.832261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.446 [2024-11-19 11:00:09.832289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.446 [2024-11-19 11:00:09.832301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.446 [2024-11-19 11:00:09.832319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.446 [2024-11-19 11:00:09.834051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:22.446 [2024-11-19 11:00:09.834117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:22.446 [2024-11-19 11:00:09.834166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:22.446 [2024-11-19 11:00:09.834169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.446 [2024-11-19 11:00:09.931247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:22.446 [2024-11-19 11:00:09.931487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:22.446 [2024-11-19 11:00:09.931791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:22.446 [2024-11-19 11:00:09.932420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:22.446 [2024-11-19 11:00:09.932690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.446 11:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 [2024-11-19 11:00:09.982861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 Malloc0 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:22.446 [2024-11-19 11:00:10.059167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.446 { 00:32:22.446 "params": { 00:32:22.446 "name": "Nvme$subsystem", 00:32:22.446 "trtype": "$TEST_TRANSPORT", 00:32:22.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.446 "adrfam": "ipv4", 00:32:22.446 "trsvcid": "$NVMF_PORT", 00:32:22.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.446 "hdgst": ${hdgst:-false}, 00:32:22.446 "ddgst": ${ddgst:-false} 00:32:22.446 }, 00:32:22.446 "method": "bdev_nvme_attach_controller" 00:32:22.446 } 00:32:22.446 EOF 00:32:22.446 )") 00:32:22.446 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:22.705 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:22.705 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:22.705 11:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:22.705 "params": { 00:32:22.705 "name": "Nvme1", 00:32:22.705 "trtype": "tcp", 00:32:22.705 "traddr": "10.0.0.2", 00:32:22.705 "adrfam": "ipv4", 00:32:22.705 "trsvcid": "4420", 00:32:22.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:22.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:22.705 "hdgst": false, 00:32:22.705 "ddgst": false 00:32:22.705 }, 00:32:22.705 "method": "bdev_nvme_attach_controller" 00:32:22.705 }' 00:32:22.705 [2024-11-19 11:00:10.109886] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:32:22.705 [2024-11-19 11:00:10.109959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506593 ] 00:32:22.705 [2024-11-19 11:00:10.181660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:22.705 [2024-11-19 11:00:10.248362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.705 [2024-11-19 11:00:10.248385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.705 [2024-11-19 11:00:10.248389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.963 I/O targets: 00:32:22.963 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:22.963 00:32:22.963 00:32:22.963 CUnit - A unit testing framework for C - Version 2.1-3 00:32:22.963 http://cunit.sourceforge.net/ 00:32:22.963 00:32:22.963 00:32:22.963 Suite: bdevio tests on: Nvme1n1 00:32:22.963 Test: blockdev write read block ...passed 00:32:22.963 Test: blockdev write zeroes read block ...passed 00:32:22.963 Test: blockdev write zeroes read no split ...passed 00:32:22.963 Test: blockdev 
write zeroes read split ...passed 00:32:22.963 Test: blockdev write zeroes read split partial ...passed 00:32:22.963 Test: blockdev reset ...[2024-11-19 11:00:10.577625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:22.963 [2024-11-19 11:00:10.577732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cf640 (9): Bad file descriptor 00:32:23.220 [2024-11-19 11:00:10.622625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:23.220 passed 00:32:23.220 Test: blockdev write read 8 blocks ...passed 00:32:23.220 Test: blockdev write read size > 128k ...passed 00:32:23.220 Test: blockdev write read invalid size ...passed 00:32:23.220 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:23.220 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:23.220 Test: blockdev write read max offset ...passed 00:32:23.220 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:23.220 Test: blockdev writev readv 8 blocks ...passed 00:32:23.220 Test: blockdev writev readv 30 x 1block ...passed 00:32:23.220 Test: blockdev writev readv block ...passed 00:32:23.220 Test: blockdev writev readv size > 128k ...passed 00:32:23.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:23.220 Test: blockdev comparev and writev ...[2024-11-19 11:00:10.794496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.220 [2024-11-19 11:00:10.794534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.220 [2024-11-19 11:00:10.794558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 
[2024-11-19 11:00:10.794575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.794947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.794971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.794993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.795008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.795384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.795408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.795429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.795445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.795811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.795834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:23.221 [2024-11-19 11:00:10.795855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:23.221 [2024-11-19 11:00:10.795871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:23.221 passed 00:32:23.479 Test: blockdev nvme passthru rw ...passed 00:32:23.479 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:00:10.877572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:23.479 [2024-11-19 11:00:10.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:23.479 [2024-11-19 11:00:10.877752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:23.479 [2024-11-19 11:00:10.877775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:23.479 [2024-11-19 11:00:10.877922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:23.479 [2024-11-19 11:00:10.877945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:23.479 [2024-11-19 11:00:10.878091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:23.479 [2024-11-19 11:00:10.878115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:23.479 passed 00:32:23.479 Test: blockdev nvme admin passthru ...passed 00:32:23.479 Test: blockdev copy ...passed 00:32:23.479 00:32:23.479 Run Summary: Type Total Ran Passed Failed Inactive 00:32:23.479 suites 1 1 n/a 0 0 00:32:23.479 tests 23 23 23 0 0 00:32:23.479 asserts 152 152 152 0 n/a 00:32:23.479 00:32:23.479 Elapsed time = 0.912 
seconds 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.737 rmmod nvme_tcp 00:32:23.737 rmmod nvme_fabrics 00:32:23.737 rmmod nvme_keyring 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1506557 ']' 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1506557 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1506557 ']' 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1506557 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506557 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1506557' 00:32:23.737 killing process with pid 1506557 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1506557 00:32:23.737 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1506557 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.996 11:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.900 00:32:25.900 real 0m6.137s 00:32:25.900 user 0m7.565s 00:32:25.900 sys 0m2.472s 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.900 ************************************ 00:32:25.900 END TEST nvmf_bdevio 00:32:25.900 ************************************ 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:25.900 00:32:25.900 real 3m56.221s 00:32:25.900 user 8m57.552s 00:32:25.900 sys 1m24.752s 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:25.900 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.900 ************************************ 00:32:25.900 END TEST nvmf_target_core_interrupt_mode 00:32:25.900 ************************************ 00:32:26.159 11:00:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:26.159 11:00:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:26.159 11:00:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.159 11:00:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.159 ************************************ 00:32:26.159 START TEST nvmf_interrupt 00:32:26.159 ************************************ 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:26.159 * Looking for test storage... 
00:32:26.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.159 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.160 --rc genhtml_branch_coverage=1 00:32:26.160 --rc genhtml_function_coverage=1 00:32:26.160 --rc genhtml_legend=1 00:32:26.160 --rc geninfo_all_blocks=1 00:32:26.160 --rc geninfo_unexecuted_blocks=1 00:32:26.160 00:32:26.160 ' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.160 --rc genhtml_branch_coverage=1 00:32:26.160 --rc 
genhtml_function_coverage=1 00:32:26.160 --rc genhtml_legend=1 00:32:26.160 --rc geninfo_all_blocks=1 00:32:26.160 --rc geninfo_unexecuted_blocks=1 00:32:26.160 00:32:26.160 ' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.160 --rc genhtml_branch_coverage=1 00:32:26.160 --rc genhtml_function_coverage=1 00:32:26.160 --rc genhtml_legend=1 00:32:26.160 --rc geninfo_all_blocks=1 00:32:26.160 --rc geninfo_unexecuted_blocks=1 00:32:26.160 00:32:26.160 ' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.160 --rc genhtml_branch_coverage=1 00:32:26.160 --rc genhtml_function_coverage=1 00:32:26.160 --rc genhtml_legend=1 00:32:26.160 --rc geninfo_all_blocks=1 00:32:26.160 --rc geninfo_unexecuted_blocks=1 00:32:26.160 00:32:26.160 ' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.160 
11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.160 
11:00:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.160 11:00:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.160 
11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.160 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.691 11:00:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:28.691 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:28.691 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.691 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.692 11:00:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:28.692 Found net devices under 0000:09:00.0: cvl_0_0 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:28.692 Found net devices under 0000:09:00.1: cvl_0_1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.692 11:00:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.692 11:00:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:32:28.692 00:32:28.692 --- 10.0.0.2 ping statistics --- 00:32:28.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.692 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:28.692 00:32:28.692 --- 10.0.0.1 ping statistics --- 00:32:28.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.692 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.692 11:00:16 
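The connectivity check above pings each address once across the namespace boundary and relies on ping's summary line. As an illustration only (the `avg_rtt` helper is hypothetical, not part of the SPDK scripts), the average RTT can be pulled out of that summary like this, using the exact line from the log:

```shell
# Hypothetical helper: extract the average RTT (ms) from ping's
# "rtt min/avg/max/mdev = a/b/c/d ms" summary line. The sample line is
# copied from the log above; the function is illustrative only.
avg_rtt() {
    # Split the right-hand side of " = " on "/"; field 2 is the average.
    awk -F' = ' '/^rtt/ {split($2, a, "/"); print a[2]}'
}

line='rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms'
echo "$line" | avg_rtt   # -> 0.267
```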
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1508683 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1508683 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1508683 ']' 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.692 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.692 [2024-11-19 11:00:16.132412] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:28.692 [2024-11-19 11:00:16.133488] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:32:28.692 [2024-11-19 11:00:16.133553] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.692 [2024-11-19 11:00:16.203941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:28.692 [2024-11-19 11:00:16.257878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.692 [2024-11-19 11:00:16.257930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.692 [2024-11-19 11:00:16.257957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.692 [2024-11-19 11:00:16.257968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.692 [2024-11-19 11:00:16.257976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.692 [2024-11-19 11:00:16.259274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.692 [2024-11-19 11:00:16.259279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.952 [2024-11-19 11:00:16.345612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:28.952 [2024-11-19 11:00:16.345644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:28.952 [2024-11-19 11:00:16.345899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:28.952 5000+0 records in 00:32:28.952 5000+0 records out 00:32:28.952 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0137584 s, 744 MB/s 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 AIO0 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.952 11:00:16 
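The `setup_bdev_aio` step above writes the AIO backing file with `dd if=/dev/zero ... bs=2048 count=5000`, so its size is bs * count = 10240000 bytes, matching the reported "10240000 bytes (10 MB, 9.8 MiB)". A scaled-down sketch of the same arithmetic (using a throwaway temp file, not the test's `aiofile`):

```shell
# Sketch of the backing-file step, scaled down: dd writes `count` blocks
# of `bs` bytes, so the resulting file is exactly bs * count bytes.
# (The real run uses bs=2048 count=5000 -> 10240000 bytes.)
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=2048 count=5 status=none
wc -c < "$tmpfile"   # -> 10240
rm -f "$tmpfile"
```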
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 [2024-11-19 11:00:16.456004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:28.952 [2024-11-19 11:00:16.484312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1508683 0 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 0 idle 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:28.952 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508683 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508683 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.212 
11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1508683 1 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 1 idle 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508688 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508688 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1508848 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1508683 0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1508683 0 busy 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- 
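The `reactor_is_busy_or_idle` checks traced above take one `top -bHn 1` line for the reactor thread, strip leading whitespace with `sed`, read field 9 (%CPU) with `awk`, truncate it to an integer, and compare it against the busy/idle thresholds. A self-contained sketch of that classification (the `classify_reactor` function and its default thresholds of 65/30 are illustrative; the sample lines are taken from the log):

```shell
# Hedged sketch of the reactor state check: field 9 of a top(1) thread
# line is %CPU; truncate to an integer and compare against thresholds.
classify_reactor() {
    local line=$1 busy_threshold=${2:-65} idle_threshold=${3:-30}
    local cpu_rate
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}   # integer truncation, as in interrupt/common.sh
    if (( cpu_rate >= busy_threshold )); then
        echo busy
    elif (( cpu_rate <= idle_threshold )); then
        echo idle
    else
        echo neither
    fi
}

classify_reactor '1508683 root 20 0 128.2g 48768 35328 R 86.7 0.1 0:00.39 reactor_0'  # -> busy
classify_reactor '1508688 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1'   # -> idle
```

Note the real script inverts the comparisons (it returns failure when the rate crosses the wrong side of the threshold); the classification above is the same decision expressed directly.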
interrupt/common.sh@12 -- # local state=busy 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:29.212 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:29.213 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508683 root 20 0 128.2g 48768 35328 R 86.7 0.1 0:00.39 reactor_0' 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508683 root 20 0 128.2g 48768 35328 R 86.7 0.1 0:00.39 reactor_0 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:29.471 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:29.472 11:00:16 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1508683 1 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1508683 1 busy 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:29.472 11:00:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508688 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.21 reactor_1' 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508688 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.21 reactor_1 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.730 11:00:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1508848 00:32:39.699 Initializing NVMe Controllers 00:32:39.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:39.699 Controller IO queue size 256, less than required. 00:32:39.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:39.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:39.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:39.699 Initialization complete. Launching workers. 
00:32:39.699 ======================================================== 00:32:39.699 Latency(us) 00:32:39.699 Device Information : IOPS MiB/s Average min max 00:32:39.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13138.37 51.32 19499.44 4154.10 23454.31 00:32:39.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13876.47 54.20 18460.94 4210.50 30191.64 00:32:39.699 ======================================================== 00:32:39.699 Total : 27014.84 105.53 18966.00 4154.10 30191.64 00:32:39.699 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1508683 0 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 0 idle 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:39.699 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:39.700 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:39.700 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:39.700 11:00:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
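The Total row in the latency table above can be cross-checked: total IOPS is the sum of the two per-core queues, and the reported average latency is the IOPS-weighted mean of the per-queue averages. A quick arithmetic sketch using the two rows from the log:

```shell
# Arithmetic check of the Total row: 13138.37 + 13876.47 IOPS, and the
# IOPS-weighted mean of the two average latencies, reproduce the
# reported 27014.84 IOPS and 18966.00 us.
awk 'BEGIN {
    iops2 = 13138.37; avg2 = 19499.44   # NSID 1 from core 2
    iops3 = 13876.47; avg3 = 18460.94   # NSID 1 from core 3
    total = iops2 + iops3
    wavg  = (iops2 * avg2 + iops3 * avg3) / total
    printf "total=%.2f wavg=%.2f\n", total, wavg
}'
# prints "total=27014.84 wavg=18966.00"
```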
grep reactor_0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508683 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:19.77 reactor_0' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508683 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:19.77 reactor_0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1508683 1 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 1 idle 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:39.700 11:00:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508688 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.54 reactor_1' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508688 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.54 reactor_1 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:39.700 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.958 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
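The reactor_is_busy_or_idle checks traced above (interrupt/common.sh) boil down to: take one batch sample of per-thread CPU usage with `top`, pick out the reactor thread, read the %CPU column (field 9), and compare it against the 30% idle threshold. A minimal standalone sketch, assuming bash and procps `top`; the PID and thread name are placeholders, not part of the test harness:

```shell
#!/usr/bin/env bash
# Sketch of the idle check from interrupt/common.sh (simplified: no retry
# loop, no busy branch). Returns 0 if the named thread of the given PID
# is at or below the idle threshold in a single top sample.
reactor_is_idle_sketch() {
    local pid=$1 thread=$2 idle_threshold=30
    local top_line cpu_rate
    # -b batch mode, -H show threads, -n 1 single iteration, -w 256 wide output
    top_line=$(top -bHn 1 -p "$pid" -w 256 | grep "$thread") || return 1
    # Field 9 of top's per-thread line is %CPU; strip leading whitespace first
    cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}   # truncate the fractional part, as the harness does
    (( ${cpu_rate:-0} <= idle_threshold ))
}

# Example: check the current shell's own main thread (usually near 0% CPU).
if reactor_is_idle_sketch "$$" "$(ps -o comm= -p "$$")"; then echo "idle"; fi
```

The real helper retries up to 10 times because a reactor that has just switched to interrupt mode may still show residual CPU in the first sample.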
00:32:39.958 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:39.958 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.958 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:39.958 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1508683 0 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 0 idle 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.485 11:00:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508683 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:19.87 reactor_0' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508683 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:19.87 reactor_0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1508683 1 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1508683 1 idle 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1508683 00:32:42.486 
11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1508683 -w 256 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1508688 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:09.57 reactor_1' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1508688 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:09.57 reactor_1 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:42.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:42.486 11:00:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.486 rmmod nvme_tcp 00:32:42.486 rmmod nvme_fabrics 00:32:42.486 rmmod nvme_keyring 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.486 11:00:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1508683 ']' 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1508683 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1508683 ']' 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1508683 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.486 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508683 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508683' 00:32:42.745 killing process with pid 1508683 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1508683 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1508683 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.745 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.283 11:00:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.283 00:32:45.283 real 0m18.851s 00:32:45.283 user 0m36.903s 00:32:45.283 sys 0m6.632s 00:32:45.283 11:00:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.283 11:00:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.283 ************************************ 00:32:45.283 END TEST nvmf_interrupt 00:32:45.283 ************************************ 00:32:45.283 00:32:45.283 real 25m2.912s 00:32:45.283 user 58m57.343s 00:32:45.283 sys 6m41.546s 00:32:45.284 11:00:32 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.284 11:00:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.284 ************************************ 00:32:45.284 END TEST nvmf_tcp 00:32:45.284 ************************************ 00:32:45.284 11:00:32 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:45.284 11:00:32 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:45.284 11:00:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:45.284 11:00:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.284 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:32:45.284 ************************************ 
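The killprocess teardown traced above (common/autotest_common.sh) checks that the PID still maps to the expected process before signalling it, and refuses to signal a bare sudo wrapper. A hedged sketch of that flow, with error handling simplified; the sudo guard mirrors the `'[' reactor_0 = sudo ']'` test visible in the log:

```shell
#!/usr/bin/env bash
# Sketch of killprocess: resolve the PID's command name, guard against
# killing sudo itself, then kill and reap the process.
killprocess_sketch() {
    local pid=$1 process_name
    # If ps can't find the PID, the process is already gone: nothing to do.
    process_name=$(ps --no-headers -o comm= "$pid") || return 0
    # Never signal a bare sudo wrapper; kill its child instead.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null   # reap it if it is our child; ignore otherwise
    return 0
}

# Example: start a throwaway child and tear it down.
sleep 60 & killprocess_sketch $!
```

In the log the guard matters because the target was launched directly (comm is reactor_0), so the plain kill path is taken.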
00:32:45.284 START TEST spdkcli_nvmf_tcp 00:32:45.284 ************************************ 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:45.284 * Looking for test storage... 00:32:45.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.284 --rc genhtml_branch_coverage=1 00:32:45.284 --rc genhtml_function_coverage=1 00:32:45.284 --rc genhtml_legend=1 00:32:45.284 --rc geninfo_all_blocks=1 00:32:45.284 --rc geninfo_unexecuted_blocks=1 00:32:45.284 00:32:45.284 ' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.284 --rc genhtml_branch_coverage=1 00:32:45.284 --rc genhtml_function_coverage=1 00:32:45.284 --rc genhtml_legend=1 00:32:45.284 --rc geninfo_all_blocks=1 
00:32:45.284 --rc geninfo_unexecuted_blocks=1 00:32:45.284 00:32:45.284 ' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.284 --rc genhtml_branch_coverage=1 00:32:45.284 --rc genhtml_function_coverage=1 00:32:45.284 --rc genhtml_legend=1 00:32:45.284 --rc geninfo_all_blocks=1 00:32:45.284 --rc geninfo_unexecuted_blocks=1 00:32:45.284 00:32:45.284 ' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.284 --rc genhtml_branch_coverage=1 00:32:45.284 --rc genhtml_function_coverage=1 00:32:45.284 --rc genhtml_legend=1 00:32:45.284 --rc geninfo_all_blocks=1 00:32:45.284 --rc geninfo_unexecuted_blocks=1 00:32:45.284 00:32:45.284 ' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
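The cmp_versions trace above (scripts/common.sh, used here to decide that lcov 1.15 predates 2 and therefore needs the extra branch/function coverage flags) is a component-wise numeric comparison of dotted version strings. A minimal sketch of the "less than" case, assuming bash; components are treated as plain integers (no leading-zero/octal handling):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison: split both versions on '.', '-' and ':'
# and compare component-wise as integers, padding the shorter one with 0s.
# Returns 0 (true) when $1 is strictly older than $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    local v len
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# Example: the case from the log.
if version_lt 1.15 2; then echo "1.15 is older than 2"; fi
```

Splitting on `.-:` as well as `.` lets strings like `2.39.2-1` compare sensibly, which is why the harness sets IFS that way.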
00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.284 11:00:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1510840 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1510840 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1510840 ']' 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.285 
11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.285 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.285 [2024-11-19 11:00:32.698033] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:32:45.285 [2024-11-19 11:00:32.698113] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510840 ] 00:32:45.285 [2024-11-19 11:00:32.764435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:45.285 [2024-11-19 11:00:32.821513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.285 [2024-11-19 11:00:32.821518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.544 11:00:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:45.544 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:45.544 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:45.544 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:45.544 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:45.544 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:45.544 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:45.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:45.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:45.544 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:45.544 ' 00:32:48.073 [2024-11-19 11:00:35.574982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.446 [2024-11-19 11:00:36.843378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:51.972 [2024-11-19 11:00:39.186358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:53.870 [2024-11-19 11:00:41.200822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:55.243 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:55.243 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:55.243 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:55.243 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:55.244 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:55.244 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:55.244 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:55.244 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:55.244 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:55.244 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:55.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:55.244 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:55.244 11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:55.244 11:00:42 spdkcli_nvmf_tcp -- 
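The long quoted argument lists above are fed to test/spdkcli/spdkcli_job.py, and the `Executing command: ['…', '…', True]` echoes suggest each job is a (command, expected-match, must-succeed) triple. A minimal bash sketch of that contract, with `run_cli` as a stand-in for invoking `scripts/spdkcli.py` (the helper names and the exact meaning of the boolean are assumptions, not taken from the SPDK sources):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a spdkcli_job-style runner: run a CLI command,
# check its output against an expected substring, and fail the job only
# when the must-succeed flag is True.
run_cli() { echo "${1##* }"; }   # stand-in: pretend the CLI echoes its last token

execute_job() {
    local cmd=$1 expected=$2 must_succeed=$3 out
    echo "Executing command: ['$cmd', '$expected', $must_succeed]"
    out=$(run_cli "$cmd")
    if [[ $out == *"$expected"* ]]; then
        return 0
    elif [ "$must_succeed" = True ]; then
        echo "command failed: $cmd" >&2
        return 1
    fi
    return 0
}

execute_job "/bdevs/malloc create 32 512 Malloc1" "Malloc1" True
execute_job "/bdevs/malloc create 32 512 Malloc2" "Malloc2" True
```

The real script batches many such triples per invocation, which is why the log shows one long quoted blob per spdkcli_job.py call.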
common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.244 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:55.502 11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:55.502 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.502 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:55.502 11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:55.502 11:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:55.760 11:00:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:55.760 11:00:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:56.018 11:00:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:56.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:56.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:56.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:56.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:56.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:56.018 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:56.018 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:56.018 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:56.018 ' 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:01.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:01.345 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:01.345 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:01.345 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1510840 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1510840 ']' 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1510840 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1510840 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1510840' 00:33:01.345 killing process with pid 1510840 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1510840 00:33:01.345 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1510840 00:33:01.604 11:00:49 
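The killprocess trace above checks that a pid was supplied, that the process still exists (`kill -0`), and that its comm is not `sudo` before signalling it, then falls through to the "No such process / is not found" path on the second call. A rough shell paraphrase of that pattern, exercised against a throwaway `sleep` rather than the SPDK target (the function body is reconstructed from what the xtrace shows, not copied from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Paraphrase of the killprocess pattern visible in the xtrace: validate the
# pid, refuse to signal a sudo wrapper, then kill and reap the process.
killprocess() {
    local pid=$1 name
    [ -z "$pid" ] && return 1                     # no pid given
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found" # second-call path in the log
        return 0
    fi
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
target=$!
killprocess "$target"
kill -0 "$target" 2>/dev/null && echo "still alive" || echo "gone"   # prints "gone"
```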
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1510840 ']' 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1510840 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1510840 ']' 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1510840 00:33:01.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1510840) - No such process 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1510840 is not found' 00:33:01.604 Process with pid 1510840 is not found 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:01.604 00:33:01.604 real 0m16.601s 00:33:01.604 user 0m35.372s 00:33:01.604 sys 0m0.754s 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.604 11:00:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 ************************************ 00:33:01.604 END TEST spdkcli_nvmf_tcp 00:33:01.604 ************************************ 00:33:01.604 11:00:49 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:01.604 11:00:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:01.604 11:00:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:33:01.604 11:00:49 -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 ************************************ 00:33:01.604 START TEST nvmf_identify_passthru 00:33:01.604 ************************************ 00:33:01.604 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:01.604 * Looking for test storage... 00:33:01.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.604 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.604 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.604 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:01.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.863 --rc genhtml_branch_coverage=1 00:33:01.863 --rc genhtml_function_coverage=1 00:33:01.863 --rc genhtml_legend=1 00:33:01.863 --rc geninfo_all_blocks=1 00:33:01.863 --rc geninfo_unexecuted_blocks=1 00:33:01.863 
00:33:01.863 ' 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:01.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.863 --rc genhtml_branch_coverage=1 00:33:01.863 --rc genhtml_function_coverage=1 00:33:01.863 --rc genhtml_legend=1 00:33:01.863 --rc geninfo_all_blocks=1 00:33:01.863 --rc geninfo_unexecuted_blocks=1 00:33:01.863 00:33:01.863 ' 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:01.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.863 --rc genhtml_branch_coverage=1 00:33:01.863 --rc genhtml_function_coverage=1 00:33:01.863 --rc genhtml_legend=1 00:33:01.863 --rc geninfo_all_blocks=1 00:33:01.863 --rc geninfo_unexecuted_blocks=1 00:33:01.863 00:33:01.863 ' 00:33:01.863 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:01.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.863 --rc genhtml_branch_coverage=1 00:33:01.863 --rc genhtml_function_coverage=1 00:33:01.863 --rc genhtml_legend=1 00:33:01.863 --rc geninfo_all_blocks=1 00:33:01.863 --rc geninfo_unexecuted_blocks=1 00:33:01.863 00:33:01.863 ' 00:33:01.863 11:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.863 11:00:49 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.863 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.863 11:00:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.863 11:00:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.863 11:00:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.863 11:00:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.863 11:00:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:01.863 11:00:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:01.864 11:00:49 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.864 11:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.864 11:00:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.864 11:00:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.864 11:00:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.864 11:00:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.864 11:00:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.864 11:00:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.864 11:00:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.864 11:00:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:01.864 11:00:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.864 11:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.864 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:01.864 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.864 11:00:49 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.864 11:00:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.394 
11:00:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:04.394 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:04.394 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:04.394 Found net devices under 0000:09:00.0: cvl_0_0 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.394 11:00:51 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:04.394 Found net devices under 0000:09:00.1: cvl_0_1 00:33:04.394 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.395 
11:00:51 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:04.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:33:04.395 00:33:04.395 --- 10.0.0.2 ping statistics --- 00:33:04.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.395 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:04.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:33:04.395 00:33:04.395 --- 10.0.0.1 ping statistics --- 00:33:04.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.395 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.395 11:00:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:04.395 
11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:33:04.395 11:00:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:04.395 11:00:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:08.574 11:00:55 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:33:08.574 11:00:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:08.574 11:00:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:08.574 11:00:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1515475 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1515475 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1515475 ']' 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:12.757 [2024-11-19 11:01:00.131750] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:33:12.757 [2024-11-19 11:01:00.131852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.757 [2024-11-19 11:01:00.206361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.757 [2024-11-19 11:01:00.266278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.757 [2024-11-19 11:01:00.266363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.757 [2024-11-19 11:01:00.266379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.757 [2024-11-19 11:01:00.266391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.757 [2024-11-19 11:01:00.266401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:12.757 [2024-11-19 11:01:00.267955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.757 [2024-11-19 11:01:00.268015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.757 [2024-11-19 11:01:00.268082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.757 [2024-11-19 11:01:00.268085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:12.757 INFO: Log level set to 20 00:33:12.757 INFO: Requests: 00:33:12.757 { 00:33:12.757 "jsonrpc": "2.0", 00:33:12.757 "method": "nvmf_set_config", 00:33:12.757 "id": 1, 00:33:12.757 "params": { 00:33:12.757 "admin_cmd_passthru": { 00:33:12.757 "identify_ctrlr": true 00:33:12.757 } 00:33:12.757 } 00:33:12.757 } 00:33:12.757 00:33:12.757 INFO: response: 00:33:12.757 { 00:33:12.757 "jsonrpc": "2.0", 00:33:12.757 "id": 1, 00:33:12.757 "result": true 00:33:12.757 } 00:33:12.757 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.757 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.757 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:12.757 INFO: Setting log level to 20 00:33:12.757 INFO: Setting log level to 20 00:33:12.757 INFO: Log level set to 20 00:33:12.757 INFO: Log level set to 20 00:33:12.757 
INFO: Requests: 00:33:12.757 { 00:33:12.757 "jsonrpc": "2.0", 00:33:12.757 "method": "framework_start_init", 00:33:12.757 "id": 1 00:33:12.757 } 00:33:12.757 00:33:12.757 INFO: Requests: 00:33:12.757 { 00:33:12.757 "jsonrpc": "2.0", 00:33:12.757 "method": "framework_start_init", 00:33:12.757 "id": 1 00:33:12.757 } 00:33:12.757 00:33:13.015 [2024-11-19 11:01:00.464723] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:13.015 INFO: response: 00:33:13.015 { 00:33:13.015 "jsonrpc": "2.0", 00:33:13.015 "id": 1, 00:33:13.015 "result": true 00:33:13.015 } 00:33:13.015 00:33:13.015 INFO: response: 00:33:13.015 { 00:33:13.015 "jsonrpc": "2.0", 00:33:13.015 "id": 1, 00:33:13.016 "result": true 00:33:13.016 } 00:33:13.016 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.016 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:13.016 INFO: Setting log level to 40 00:33:13.016 INFO: Setting log level to 40 00:33:13.016 INFO: Setting log level to 40 00:33:13.016 [2024-11-19 11:01:00.474762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.016 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:13.016 11:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:33:13.016 11:01:00 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.016 11:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.303 Nvme0n1 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.303 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.303 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.303 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.303 [2024-11-19 11:01:03.381642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.303 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.303 11:01:03 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.303 [ 00:33:16.303 { 00:33:16.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:16.303 "subtype": "Discovery", 00:33:16.303 "listen_addresses": [], 00:33:16.303 "allow_any_host": true, 00:33:16.303 "hosts": [] 00:33:16.303 }, 00:33:16.303 { 00:33:16.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.303 "subtype": "NVMe", 00:33:16.303 "listen_addresses": [ 00:33:16.303 { 00:33:16.303 "trtype": "TCP", 00:33:16.303 "adrfam": "IPv4", 00:33:16.303 "traddr": "10.0.0.2", 00:33:16.303 "trsvcid": "4420" 00:33:16.303 } 00:33:16.303 ], 00:33:16.303 "allow_any_host": true, 00:33:16.303 "hosts": [], 00:33:16.303 "serial_number": "SPDK00000000000001", 00:33:16.303 "model_number": "SPDK bdev Controller", 00:33:16.303 "max_namespaces": 1, 00:33:16.303 "min_cntlid": 1, 00:33:16.303 "max_cntlid": 65519, 00:33:16.303 "namespaces": [ 00:33:16.303 { 00:33:16.303 "nsid": 1, 00:33:16.303 "bdev_name": "Nvme0n1", 00:33:16.303 "name": "Nvme0n1", 00:33:16.303 "nguid": "87641B5231F54740AE3C81BE3C537BD3", 00:33:16.303 "uuid": "87641b52-31f5-4740-ae3c-81be3c537bd3" 00:33:16.303 } 00:33:16.303 ] 00:33:16.303 } 00:33:16.303 ] 00:33:16.303 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.303 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.304 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.304 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:16.304 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:16.304 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.304 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.304 rmmod nvme_tcp 00:33:16.304 rmmod nvme_fabrics 00:33:16.304 rmmod nvme_keyring 00:33:16.561 11:01:03 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.561 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:16.561 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:16.561 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1515475 ']' 00:33:16.561 11:01:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1515475 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1515475 ']' 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1515475 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515475 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515475' 00:33:16.561 killing process with pid 1515475 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1515475 00:33:16.561 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1515475 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:17.935 11:01:05 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.935 11:01:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.935 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:17.935 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.472 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.472 00:33:20.472 real 0m18.346s 00:33:20.472 user 0m26.388s 00:33:20.472 sys 0m3.383s 00:33:20.472 11:01:07 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.472 11:01:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.472 ************************************ 00:33:20.472 END TEST nvmf_identify_passthru 00:33:20.472 ************************************ 00:33:20.472 11:01:07 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:20.472 11:01:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.472 11:01:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.472 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:33:20.472 ************************************ 00:33:20.472 START TEST nvmf_dif 00:33:20.472 ************************************ 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:20.472 * Looking for test storage... 
00:33:20.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.472 --rc genhtml_branch_coverage=1 00:33:20.472 --rc genhtml_function_coverage=1 00:33:20.472 --rc genhtml_legend=1 00:33:20.472 --rc geninfo_all_blocks=1 00:33:20.472 --rc geninfo_unexecuted_blocks=1 00:33:20.472 00:33:20.472 ' 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.472 --rc genhtml_branch_coverage=1 00:33:20.472 --rc genhtml_function_coverage=1 00:33:20.472 --rc genhtml_legend=1 00:33:20.472 --rc geninfo_all_blocks=1 00:33:20.472 --rc geninfo_unexecuted_blocks=1 00:33:20.472 00:33:20.472 ' 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.472 --rc genhtml_branch_coverage=1 00:33:20.472 --rc genhtml_function_coverage=1 00:33:20.472 --rc genhtml_legend=1 00:33:20.472 --rc geninfo_all_blocks=1 00:33:20.472 --rc geninfo_unexecuted_blocks=1 00:33:20.472 00:33:20.472 ' 00:33:20.472 11:01:07 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.472 --rc genhtml_branch_coverage=1 00:33:20.472 --rc genhtml_function_coverage=1 00:33:20.472 --rc genhtml_legend=1 00:33:20.472 --rc geninfo_all_blocks=1 00:33:20.472 --rc geninfo_unexecuted_blocks=1 00:33:20.472 00:33:20.472 ' 00:33:20.472 11:01:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:20.472 11:01:07 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.472 11:01:07 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.472 11:01:07 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.472 11:01:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.472 11:01:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.472 11:01:07 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.472 11:01:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:20.473 11:01:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:20.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.473 11:01:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:20.473 11:01:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:20.473 11:01:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:20.473 11:01:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:20.473 11:01:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.473 11:01:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:20.473 11:01:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.473 11:01:07 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.473 11:01:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:22.375 11:01:09 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:22.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.375 11:01:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:22.376 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.376 11:01:09 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:22.376 Found net devices under 0000:09:00.0: cvl_0_0 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:22.376 Found net devices under 0000:09:00.1: cvl_0_1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.376 
11:01:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:22.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:22.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:33:22.376 00:33:22.376 --- 10.0.0.2 ping statistics --- 00:33:22.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.376 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:22.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:33:22.376 00:33:22.376 --- 10.0.0.1 ping statistics --- 00:33:22.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.376 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:22.376 11:01:09 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.751 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:23.751 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:23.751 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:23.751 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:23.751 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:23.751 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:23.751 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:23.751 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:23.751 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:23.751 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:23.751 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:23.751 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:23.751 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:33:23.751 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:23.751 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:23.751 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:23.751 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.751 11:01:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:23.751 11:01:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1518673 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:23.751 11:01:11 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1518673 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1518673 ']' 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:23.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.751 11:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.010 [2024-11-19 11:01:11.414217] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:33:24.010 [2024-11-19 11:01:11.414322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.010 [2024-11-19 11:01:11.486257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.010 [2024-11-19 11:01:11.543065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.010 [2024-11-19 11:01:11.543133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.010 [2024-11-19 11:01:11.543157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.010 [2024-11-19 11:01:11.543168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.010 [2024-11-19 11:01:11.543178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:24.010 [2024-11-19 11:01:11.543807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:24.269 11:01:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.269 11:01:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.269 11:01:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:24.269 11:01:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.269 [2024-11-19 11:01:11.687036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.269 11:01:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.269 11:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.269 ************************************ 00:33:24.269 START TEST fio_dif_1_default 00:33:24.269 ************************************ 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.269 bdev_null0 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.269 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.270 [2024-11-19 11:01:11.747382] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.270 { 00:33:24.270 "params": { 00:33:24.270 "name": "Nvme$subsystem", 00:33:24.270 "trtype": "$TEST_TRANSPORT", 00:33:24.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.270 "adrfam": "ipv4", 00:33:24.270 "trsvcid": "$NVMF_PORT", 00:33:24.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.270 "hdgst": ${hdgst:-false}, 00:33:24.270 "ddgst": ${ddgst:-false} 00:33:24.270 }, 00:33:24.270 "method": "bdev_nvme_attach_controller" 00:33:24.270 } 00:33:24.270 EOF 00:33:24.270 )") 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.270 "params": { 00:33:24.270 "name": "Nvme0", 00:33:24.270 "trtype": "tcp", 00:33:24.270 "traddr": "10.0.0.2", 00:33:24.270 "adrfam": "ipv4", 00:33:24.270 "trsvcid": "4420", 00:33:24.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.270 "hdgst": false, 00:33:24.270 "ddgst": false 00:33:24.270 }, 00:33:24.270 "method": "bdev_nvme_attach_controller" 00:33:24.270 }' 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:24.270 11:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:24.530 fio-3.35 
00:33:24.530 Starting 1 thread 00:33:36.728 00:33:36.728 filename0: (groupid=0, jobs=1): err= 0: pid=1518927: Tue Nov 19 11:01:22 2024 00:33:36.728 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10016msec) 00:33:36.728 slat (nsec): min=4026, max=64937, avg=9643.39, stdev=3639.18 00:33:36.728 clat (usec): min=563, max=46668, avg=40354.93, stdev=5106.42 00:33:36.728 lat (usec): min=572, max=46682, avg=40364.57, stdev=5105.86 00:33:36.728 clat percentiles (usec): 00:33:36.728 | 1.00th=[ 594], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:36.728 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:36.728 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:36.728 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:33:36.728 | 99.99th=[46924] 00:33:36.728 bw ( KiB/s): min= 384, max= 448, per=99.71%, avg=395.20, stdev=18.79, samples=20 00:33:36.728 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:33:36.728 lat (usec) : 750=1.61% 00:33:36.728 lat (msec) : 50=98.39% 00:33:36.728 cpu : usr=90.69%, sys=9.04%, ctx=23, majf=0, minf=299 00:33:36.728 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.728 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.728 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.728 00:33:36.728 Run status group 0 (all jobs): 00:33:36.728 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10016-10016msec 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.728 
11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.728 00:33:36.728 real 0m11.188s 00:33:36.728 user 0m10.173s 00:33:36.728 sys 0m1.198s 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 ************************************ 00:33:36.728 END TEST fio_dif_1_default 00:33:36.728 ************************************ 00:33:36.728 11:01:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:36.728 11:01:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:36.728 11:01:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 ************************************ 00:33:36.728 START TEST fio_dif_1_multi_subsystems 00:33:36.728 ************************************ 00:33:36.728 11:01:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 bdev_null0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.728 11:01:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.728 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.729 [2024-11-19 11:01:22.986995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.729 bdev_null1 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.729 11:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.729 { 00:33:36.729 "params": { 00:33:36.729 "name": "Nvme$subsystem", 00:33:36.729 "trtype": "$TEST_TRANSPORT", 00:33:36.729 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:36.729 "adrfam": "ipv4", 00:33:36.729 "trsvcid": "$NVMF_PORT", 00:33:36.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.729 "hdgst": ${hdgst:-false}, 00:33:36.729 "ddgst": ${ddgst:-false} 00:33:36.729 }, 00:33:36.729 "method": "bdev_nvme_attach_controller" 00:33:36.729 } 00:33:36.729 EOF 00:33:36.729 )") 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:36.729 
11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.729 { 00:33:36.729 "params": { 00:33:36.729 "name": "Nvme$subsystem", 00:33:36.729 "trtype": "$TEST_TRANSPORT", 00:33:36.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.729 "adrfam": "ipv4", 00:33:36.729 "trsvcid": "$NVMF_PORT", 00:33:36.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.729 "hdgst": ${hdgst:-false}, 00:33:36.729 "ddgst": ${ddgst:-false} 00:33:36.729 }, 00:33:36.729 "method": "bdev_nvme_attach_controller" 00:33:36.729 } 00:33:36.729 EOF 00:33:36.729 )") 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.729 "params": { 00:33:36.729 "name": "Nvme0", 00:33:36.729 "trtype": "tcp", 00:33:36.729 "traddr": "10.0.0.2", 00:33:36.729 "adrfam": "ipv4", 00:33:36.729 "trsvcid": "4420", 00:33:36.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.729 "hdgst": false, 00:33:36.729 "ddgst": false 00:33:36.729 }, 00:33:36.729 "method": "bdev_nvme_attach_controller" 00:33:36.729 },{ 00:33:36.729 "params": { 00:33:36.729 "name": "Nvme1", 00:33:36.729 "trtype": "tcp", 00:33:36.729 "traddr": "10.0.0.2", 00:33:36.729 "adrfam": "ipv4", 00:33:36.729 "trsvcid": "4420", 00:33:36.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.729 "hdgst": false, 00:33:36.729 "ddgst": false 00:33:36.729 }, 00:33:36.729 "method": "bdev_nvme_attach_controller" 00:33:36.729 }' 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.729 11:01:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.730 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:36.730 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:36.730 fio-3.35 00:33:36.730 Starting 2 threads 00:33:46.697 00:33:46.697 filename0: (groupid=0, jobs=1): err= 0: pid=1520380: Tue Nov 19 11:01:34 2024 00:33:46.697 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10035msec) 00:33:46.697 slat (nsec): min=6846, max=91111, avg=9263.93, stdev=4116.58 00:33:46.697 clat (usec): min=562, max=44391, avg=21052.04, stdev=20344.15 00:33:46.697 lat (usec): min=570, max=44418, avg=21061.30, stdev=20344.29 00:33:46.697 clat percentiles (usec): 00:33:46.697 | 1.00th=[ 586], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 676], 00:33:46.697 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 1172], 60.00th=[41157], 00:33:46.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:46.697 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:33:46.697 | 99.99th=[44303] 00:33:46.697 bw ( KiB/s): min= 704, max= 832, per=65.66%, avg=760.00, stdev=32.63, samples=20 00:33:46.697 iops : min= 176, max= 208, avg=190.00, stdev= 8.16, samples=20 00:33:46.697 lat (usec) : 750=29.15%, 1000=19.91% 00:33:46.697 lat (msec) : 2=0.95%, 50=50.00% 00:33:46.697 cpu : usr=94.98%, sys=4.71%, ctx=15, majf=0, minf=175 00:33:46.697 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:46.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.697 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.697 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:46.697 filename1: (groupid=0, jobs=1): err= 0: pid=1520381: Tue Nov 19 11:01:34 2024 00:33:46.697 read: IOPS=99, BW=399KiB/s (408kB/s)(4000KiB/10035msec) 00:33:46.697 slat (nsec): min=7009, max=26682, avg=8984.58, stdev=2621.35 00:33:46.697 clat (usec): min=599, max=45381, avg=40109.54, stdev=6190.17 00:33:46.697 lat (usec): min=607, max=45408, avg=40118.53, stdev=6190.15 00:33:46.697 clat percentiles (usec): 00:33:46.697 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:46.697 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:46.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:46.697 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:33:46.698 | 99.99th=[45351] 00:33:46.698 bw ( KiB/s): min= 384, max= 448, per=34.38%, avg=398.40, stdev=21.96, samples=20 00:33:46.698 iops : min= 96, max= 112, avg=99.60, stdev= 5.49, samples=20 00:33:46.698 lat (usec) : 750=1.60%, 1000=0.80% 00:33:46.698 lat (msec) : 50=97.60% 00:33:46.698 cpu : usr=94.23%, sys=5.46%, ctx=15, majf=0, minf=148 00:33:46.698 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.698 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.698 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:46.698 00:33:46.698 Run status group 0 (all jobs): 00:33:46.698 READ: bw=1158KiB/s (1185kB/s), 399KiB/s-759KiB/s (408kB/s-777kB/s), io=11.3MiB (11.9MB), run=10035-10035msec 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 
-- # destroy_subsystems 0 1 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 11:01:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.956 00:33:46.956 real 0m11.594s 00:33:46.956 user 0m20.607s 00:33:46.956 sys 0m1.328s 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.956 ************************************ 00:33:46.956 END TEST fio_dif_1_multi_subsystems 00:33:46.956 ************************************ 00:33:46.956 11:01:34 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:46.956 11:01:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:46.956 11:01:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.956 11:01:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.215 ************************************ 00:33:47.215 START TEST fio_dif_rand_params 00:33:47.215 ************************************ 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:47.215 11:01:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.215 bdev_null0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.215 [2024-11-19 11:01:34.629920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.215 { 00:33:47.215 "params": { 00:33:47.215 "name": "Nvme$subsystem", 00:33:47.215 "trtype": "$TEST_TRANSPORT", 00:33:47.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.215 "adrfam": "ipv4", 00:33:47.215 "trsvcid": "$NVMF_PORT", 00:33:47.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.215 "hdgst": ${hdgst:-false}, 00:33:47.215 "ddgst": ${ddgst:-false} 00:33:47.215 }, 00:33:47.215 "method": "bdev_nvme_attach_controller" 00:33:47.215 } 00:33:47.215 EOF 00:33:47.215 )") 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:47.215 11:01:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.215 "params": { 00:33:47.215 "name": "Nvme0", 00:33:47.215 "trtype": "tcp", 00:33:47.215 "traddr": "10.0.0.2", 00:33:47.215 "adrfam": "ipv4", 00:33:47.215 "trsvcid": "4420", 00:33:47.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.215 "hdgst": false, 00:33:47.215 "ddgst": false 00:33:47.215 }, 00:33:47.215 "method": "bdev_nvme_attach_controller" 00:33:47.215 }' 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.215 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:47.216 11:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.474 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:47.474 ... 00:33:47.474 fio-3.35 00:33:47.474 Starting 3 threads 00:33:54.084 00:33:54.084 filename0: (groupid=0, jobs=1): err= 0: pid=1521780: Tue Nov 19 11:01:40 2024 00:33:54.084 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(155MiB/5045msec) 00:33:54.084 slat (nsec): min=4509, max=45663, avg=16199.20, stdev=4546.11 00:33:54.084 clat (usec): min=7257, max=54464, avg=12130.93, stdev=6765.29 00:33:54.084 lat (usec): min=7271, max=54478, avg=12147.13, stdev=6765.09 00:33:54.084 clat percentiles (usec): 00:33:54.084 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:33:54.084 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:33:54.084 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13042], 00:33:54.084 | 99.00th=[51643], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:33:54.084 | 99.99th=[54264] 00:33:54.084 bw ( KiB/s): min=26368, max=35072, per=35.98%, avg=31744.00, stdev=3158.49, samples=10 00:33:54.084 iops : min= 206, max= 274, avg=248.00, stdev=24.68, samples=10 00:33:54.084 lat (msec) : 10=16.18%, 20=81.00%, 50=0.16%, 100=2.66% 00:33:54.084 cpu : usr=92.96%, sys=5.77%, ctx=308, majf=0, minf=58 00:33:54.084 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.084 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.084 filename0: (groupid=0, jobs=1): err= 0: pid=1521781: Tue Nov 19 11:01:40 2024 00:33:54.084 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5004msec) 00:33:54.084 slat (nsec): min=4442, max=43196, avg=16468.23, stdev=3811.64 
00:33:54.084 clat (usec): min=5362, max=54601, avg=14193.72, stdev=4854.98 00:33:54.084 lat (usec): min=5373, max=54616, avg=14210.19, stdev=4854.98 00:33:54.084 clat percentiles (usec): 00:33:54.084 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[11731], 00:33:54.084 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14353], 60.00th=[15008], 00:33:54.084 | 70.00th=[15533], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:33:54.084 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:33:54.084 | 99.99th=[54789] 00:33:54.084 bw ( KiB/s): min=23808, max=29952, per=30.56%, avg=26956.80, stdev=2158.95, samples=10 00:33:54.084 iops : min= 186, max= 234, avg=210.60, stdev=16.87, samples=10 00:33:54.084 lat (msec) : 10=11.27%, 20=87.59%, 100=1.14% 00:33:54.084 cpu : usr=95.08%, sys=4.46%, ctx=7, majf=0, minf=120 00:33:54.084 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.084 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.084 filename0: (groupid=0, jobs=1): err= 0: pid=1521782: Tue Nov 19 11:01:40 2024 00:33:54.084 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(147MiB/5045msec) 00:33:54.084 slat (nsec): min=6720, max=47578, avg=16643.11, stdev=5268.30 00:33:54.084 clat (usec): min=5128, max=50625, avg=12780.19, stdev=3304.47 00:33:54.084 lat (usec): min=5141, max=50662, avg=12796.84, stdev=3304.79 00:33:54.084 clat percentiles (usec): 00:33:54.084 | 1.00th=[ 5669], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[11207], 00:33:54.084 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12911], 60.00th=[13304], 00:33:54.084 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15401], 95.00th=[15926], 00:33:54.084 | 99.00th=[17171], 99.50th=[18220], 99.90th=[50070], 
99.95th=[50594], 00:33:54.084 | 99.99th=[50594] 00:33:54.084 bw ( KiB/s): min=26112, max=33792, per=34.16%, avg=30131.20, stdev=2490.94, samples=10 00:33:54.084 iops : min= 204, max= 264, avg=235.40, stdev=19.46, samples=10 00:33:54.084 lat (msec) : 10=14.84%, 20=84.73%, 50=0.25%, 100=0.17% 00:33:54.084 cpu : usr=90.40%, sys=7.08%, ctx=324, majf=0, minf=116 00:33:54.084 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.085 issued rwts: total=1179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.085 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.085 00:33:54.085 Run status group 0 (all jobs): 00:33:54.085 READ: bw=86.1MiB/s (90.3MB/s), 26.4MiB/s-30.8MiB/s (27.7MB/s-32.3MB/s), io=435MiB (456MB), run=5004-5045msec 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:54.085 11:01:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 bdev_null0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 [2024-11-19 11:01:40.820364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 bdev_null1 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.085 bdev_null2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.085 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.086 { 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme$subsystem", 00:33:54.086 "trtype": "$TEST_TRANSPORT", 00:33:54.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "$NVMF_PORT", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.086 "hdgst": ${hdgst:-false}, 00:33:54.086 "ddgst": ${ddgst:-false} 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 } 00:33:54.086 EOF 00:33:54.086 )") 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.086 
11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.086 { 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme$subsystem", 00:33:54.086 "trtype": "$TEST_TRANSPORT", 00:33:54.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "$NVMF_PORT", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.086 "hdgst": ${hdgst:-false}, 00:33:54.086 "ddgst": ${ddgst:-false} 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 } 00:33:54.086 EOF 00:33:54.086 )") 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.086 
11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.086 { 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme$subsystem", 00:33:54.086 "trtype": "$TEST_TRANSPORT", 00:33:54.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "$NVMF_PORT", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.086 "hdgst": ${hdgst:-false}, 00:33:54.086 "ddgst": ${ddgst:-false} 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 } 00:33:54.086 EOF 00:33:54.086 )") 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme0", 00:33:54.086 "trtype": "tcp", 00:33:54.086 "traddr": "10.0.0.2", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "4420", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.086 "hdgst": false, 00:33:54.086 "ddgst": false 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 },{ 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme1", 00:33:54.086 "trtype": "tcp", 00:33:54.086 "traddr": "10.0.0.2", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "4420", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.086 "hdgst": false, 00:33:54.086 "ddgst": false 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 },{ 00:33:54.086 "params": { 00:33:54.086 "name": "Nvme2", 00:33:54.086 "trtype": "tcp", 00:33:54.086 "traddr": "10.0.0.2", 00:33:54.086 "adrfam": "ipv4", 00:33:54.086 "trsvcid": "4420", 00:33:54.086 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:54.086 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:54.086 "hdgst": false, 00:33:54.086 "ddgst": false 00:33:54.086 }, 00:33:54.086 "method": "bdev_nvme_attach_controller" 00:33:54.086 }' 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.086 11:01:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.086 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.087 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.087 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:54.087 11:01:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.087 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.087 ... 00:33:54.087 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.087 ... 00:33:54.087 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.087 ... 
00:33:54.087 fio-3.35 00:33:54.087 Starting 24 threads 00:34:06.297 00:34:06.297 filename0: (groupid=0, jobs=1): err= 0: pid=1522608: Tue Nov 19 11:01:52 2024 00:34:06.297 read: IOPS=460, BW=1840KiB/s (1885kB/s)(18.0MiB/10015msec) 00:34:06.297 slat (usec): min=10, max=129, avg=68.97, stdev=22.05 00:34:06.297 clat (msec): min=23, max=191, avg=34.16, stdev=11.17 00:34:06.297 lat (msec): min=23, max=191, avg=34.23, stdev=11.17 00:34:06.297 clat percentiles (msec): 00:34:06.297 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.297 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:06.297 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.297 | 99.00th=[ 82], 99.50th=[ 118], 99.90th=[ 190], 99.95th=[ 192], 00:34:06.297 | 99.99th=[ 192] 00:34:06.297 bw ( KiB/s): min= 512, max= 1920, per=4.15%, avg=1836.80, stdev=314.29, samples=20 00:34:06.297 iops : min= 128, max= 480, avg=459.20, stdev=78.57, samples=20 00:34:06.297 lat (msec) : 50=98.61%, 100=0.74%, 250=0.65% 00:34:06.297 cpu : usr=98.19%, sys=1.34%, ctx=42, majf=0, minf=18 00:34:06.297 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.297 filename0: (groupid=0, jobs=1): err= 0: pid=1522609: Tue Nov 19 11:01:52 2024 00:34:06.297 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10007msec) 00:34:06.297 slat (usec): min=3, max=116, avg=26.12, stdev= 9.79 00:34:06.297 clat (msec): min=23, max=224, avg=34.62, stdev=15.14 00:34:06.297 lat (msec): min=23, max=224, avg=34.64, stdev=15.13 00:34:06.297 clat percentiles (msec): 00:34:06.297 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.297 | 30.00th=[ 34], 
40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.297 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.297 | 99.00th=[ 40], 99.50th=[ 205], 99.90th=[ 224], 99.95th=[ 224], 00:34:06.297 | 99.99th=[ 224] 00:34:06.297 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.297 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.297 lat (msec) : 50=99.30%, 250=0.70% 00:34:06.297 cpu : usr=96.36%, sys=2.20%, ctx=158, majf=0, minf=16 00:34:06.297 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.297 filename0: (groupid=0, jobs=1): err= 0: pid=1522610: Tue Nov 19 11:01:52 2024 00:34:06.297 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10021msec) 00:34:06.297 slat (usec): min=3, max=118, avg=28.27, stdev=24.05 00:34:06.297 clat (msec): min=8, max=184, avg=34.31, stdev=11.24 00:34:06.297 lat (msec): min=8, max=184, avg=34.34, stdev=11.24 00:34:06.297 clat percentiles (msec): 00:34:06.297 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:06.297 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.297 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.297 | 99.00th=[ 86], 99.50th=[ 133], 99.90th=[ 184], 99.95th=[ 184], 00:34:06.297 | 99.99th=[ 186] 00:34:06.297 bw ( KiB/s): min= 768, max= 1920, per=4.18%, avg=1849.60, stdev=257.60, samples=20 00:34:06.297 iops : min= 192, max= 480, avg=462.40, stdev=64.40, samples=20 00:34:06.297 lat (msec) : 10=0.65%, 20=0.04%, 50=98.28%, 100=0.34%, 250=0.69% 00:34:06.297 cpu : usr=97.72%, sys=1.57%, ctx=59, majf=0, minf=28 00:34:06.297 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.297 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.297 filename0: (groupid=0, jobs=1): err= 0: pid=1522611: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10011msec) 00:34:06.298 slat (usec): min=5, max=100, avg=27.92, stdev=17.13 00:34:06.298 clat (msec): min=23, max=233, avg=34.65, stdev=13.64 00:34:06.298 lat (msec): min=23, max=233, avg=34.68, stdev=13.64 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:06.298 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.298 | 99.00th=[ 82], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 207], 00:34:06.298 | 99.99th=[ 234] 00:34:06.298 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.298 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.298 lat (msec) : 50=99.00%, 100=0.30%, 250=0.70% 00:34:06.298 cpu : usr=97.99%, sys=1.49%, ctx=34, majf=0, minf=15 00:34:06.298 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename0: (groupid=0, jobs=1): err= 0: pid=1522612: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10006msec) 00:34:06.298 slat (usec): min=10, 
max=110, avg=35.94, stdev=12.27 00:34:06.298 clat (msec): min=16, max=417, avg=34.54, stdev=19.88 00:34:06.298 lat (msec): min=16, max=417, avg=34.58, stdev=19.88 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.298 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.298 | 99.00th=[ 39], 99.50th=[ 82], 99.90th=[ 359], 99.95th=[ 359], 00:34:06.298 | 99.99th=[ 418] 00:34:06.298 bw ( KiB/s): min= 384, max= 1920, per=4.13%, avg=1825.68, stdev=351.43, samples=19 00:34:06.298 iops : min= 96, max= 480, avg=456.42, stdev=87.86, samples=19 00:34:06.298 lat (msec) : 20=0.35%, 50=99.00%, 100=0.30%, 500=0.35% 00:34:06.298 cpu : usr=95.75%, sys=2.46%, ctx=615, majf=0, minf=25 00:34:06.298 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename0: (groupid=0, jobs=1): err= 0: pid=1522613: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10007msec) 00:34:06.298 slat (usec): min=8, max=115, avg=27.30, stdev=16.46 00:34:06.298 clat (msec): min=12, max=416, avg=34.69, stdev=22.64 00:34:06.298 lat (msec): min=12, max=416, avg=34.72, stdev=22.64 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:06.298 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.298 | 99.00th=[ 36], 99.50th=[ 41], 99.90th=[ 418], 99.95th=[ 418], 00:34:06.298 | 99.99th=[ 418] 00:34:06.298 bw ( KiB/s): min= 
384, max= 2032, per=4.13%, avg=1824.84, stdev=352.91, samples=19 00:34:06.298 iops : min= 96, max= 508, avg=456.21, stdev=88.23, samples=19 00:34:06.298 lat (msec) : 20=0.31%, 50=99.30%, 100=0.04%, 500=0.35% 00:34:06.298 cpu : usr=97.06%, sys=1.93%, ctx=147, majf=0, minf=27 00:34:06.298 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: total=4590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename0: (groupid=0, jobs=1): err= 0: pid=1522615: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:34:06.298 slat (usec): min=7, max=110, avg=32.00, stdev=14.62 00:34:06.298 clat (msec): min=24, max=181, avg=34.48, stdev=10.29 00:34:06.298 lat (msec): min=24, max=181, avg=34.51, stdev=10.29 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.298 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.298 | 99.00th=[ 99], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:34:06.298 | 99.99th=[ 182] 00:34:06.298 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1839.16, stdev=293.16, samples=19 00:34:06.298 iops : min= 160, max= 480, avg=459.79, stdev=73.29, samples=19 00:34:06.298 lat (msec) : 50=98.65%, 100=0.65%, 250=0.69% 00:34:06.298 cpu : usr=98.31%, sys=1.30%, ctx=37, majf=0, minf=22 00:34:06.298 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: 
total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename0: (groupid=0, jobs=1): err= 0: pid=1522616: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=459, BW=1836KiB/s (1880kB/s)(17.9MiB/10003msec) 00:34:06.298 slat (nsec): min=8708, max=81829, avg=34477.47, stdev=11100.33 00:34:06.298 clat (msec): min=21, max=298, avg=34.54, stdev=16.71 00:34:06.298 lat (msec): min=21, max=298, avg=34.58, stdev=16.71 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.298 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.298 | 99.00th=[ 41], 99.50th=[ 133], 99.90th=[ 300], 99.95th=[ 300], 00:34:06.298 | 99.99th=[ 300] 00:34:06.298 bw ( KiB/s): min= 384, max= 1920, per=4.14%, avg=1832.42, stdev=351.97, samples=19 00:34:06.298 iops : min= 96, max= 480, avg=458.11, stdev=87.99, samples=19 00:34:06.298 lat (msec) : 50=99.30%, 250=0.35%, 500=0.35% 00:34:06.298 cpu : usr=98.18%, sys=1.38%, ctx=33, majf=0, minf=31 00:34:06.298 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename1: (groupid=0, jobs=1): err= 0: pid=1522617: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10028msec) 00:34:06.298 slat (usec): min=3, max=129, avg=32.96, stdev=22.64 00:34:06.298 clat (msec): min=7, max=138, avg=31.74, stdev= 9.62 00:34:06.298 lat (msec): min=7, max=138, avg=31.77, stdev= 9.62 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 22], 5.00th=[ 23], 
10.00th=[ 23], 20.00th=[ 24], 00:34:06.298 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.298 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:34:06.298 | 99.00th=[ 72], 99.50th=[ 114], 99.90th=[ 138], 99.95th=[ 138], 00:34:06.298 | 99.99th=[ 140] 00:34:06.298 bw ( KiB/s): min= 1002, max= 2816, per=4.53%, avg=2001.30, stdev=385.51, samples=20 00:34:06.298 iops : min= 250, max= 704, avg=500.30, stdev=96.45, samples=20 00:34:06.298 lat (msec) : 10=0.60%, 20=0.18%, 50=97.75%, 100=0.96%, 250=0.52% 00:34:06.298 cpu : usr=97.73%, sys=1.67%, ctx=84, majf=0, minf=42 00:34:06.298 IO depths : 1=1.3%, 2=6.0%, 4=20.1%, 8=61.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:34:06.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.298 issued rwts: total=5019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.298 filename1: (groupid=0, jobs=1): err= 0: pid=1522618: Tue Nov 19 11:01:52 2024 00:34:06.298 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:34:06.298 slat (nsec): min=6188, max=91726, avg=33556.68, stdev=12870.71 00:34:06.298 clat (msec): min=24, max=244, avg=34.44, stdev=11.38 00:34:06.298 lat (msec): min=24, max=244, avg=34.48, stdev=11.38 00:34:06.298 clat percentiles (msec): 00:34:06.298 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 75], 99.50th=[ 133], 99.90th=[ 186], 99.95th=[ 186], 00:34:06.299 | 99.99th=[ 245] 00:34:06.299 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1839.16, stdev=293.16, samples=19 00:34:06.299 iops : min= 160, max= 480, avg=459.79, stdev=73.29, samples=19 00:34:06.299 lat (msec) : 50=98.61%, 100=0.74%, 250=0.65% 00:34:06.299 cpu : 
usr=97.74%, sys=1.54%, ctx=106, majf=0, minf=19 00:34:06.299 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522619: Tue Nov 19 11:01:52 2024 00:34:06.299 read: IOPS=459, BW=1838KiB/s (1883kB/s)(18.0MiB/10004msec) 00:34:06.299 slat (nsec): min=7964, max=92094, avg=20746.65, stdev=13734.97 00:34:06.299 clat (msec): min=21, max=306, avg=34.63, stdev=15.42 00:34:06.299 lat (msec): min=21, max=306, avg=34.65, stdev=15.42 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:06.299 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 49], 99.50th=[ 205], 99.90th=[ 222], 99.95th=[ 222], 00:34:06.299 | 99.99th=[ 309] 00:34:06.299 bw ( KiB/s): min= 512, max= 1968, per=4.15%, avg=1834.95, stdev=323.19, samples=19 00:34:06.299 iops : min= 128, max= 492, avg=458.74, stdev=80.80, samples=19 00:34:06.299 lat (msec) : 50=99.30%, 250=0.65%, 500=0.04% 00:34:06.299 cpu : usr=96.96%, sys=2.03%, ctx=132, majf=0, minf=21 00:34:06.299 IO depths : 1=4.7%, 2=10.8%, 4=24.7%, 8=52.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522620: Tue Nov 19 11:01:52 2024 00:34:06.299 
read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10005msec) 00:34:06.299 slat (nsec): min=8202, max=94624, avg=34666.47, stdev=11945.11 00:34:06.299 clat (msec): min=16, max=359, avg=34.57, stdev=19.44 00:34:06.299 lat (msec): min=16, max=359, avg=34.60, stdev=19.44 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 39], 99.50th=[ 82], 99.90th=[ 359], 99.95th=[ 359], 00:34:06.299 | 99.99th=[ 359] 00:34:06.299 bw ( KiB/s): min= 384, max= 1920, per=4.13%, avg=1825.68, stdev=351.15, samples=19 00:34:06.299 iops : min= 96, max= 480, avg=456.42, stdev=87.79, samples=19 00:34:06.299 lat (msec) : 20=0.35%, 50=98.95%, 100=0.35%, 500=0.35% 00:34:06.299 cpu : usr=98.39%, sys=1.20%, ctx=23, majf=0, minf=18 00:34:06.299 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522621: Tue Nov 19 11:01:52 2024 00:34:06.299 read: IOPS=459, BW=1837KiB/s (1881kB/s)(17.9MiB/10001msec) 00:34:06.299 slat (nsec): min=9939, max=92895, avg=36703.43, stdev=12492.44 00:34:06.299 clat (msec): min=21, max=297, avg=34.51, stdev=16.60 00:34:06.299 lat (msec): min=21, max=297, avg=34.55, stdev=16.60 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 
41], 99.50th=[ 132], 99.90th=[ 296], 99.95th=[ 296], 00:34:06.299 | 99.99th=[ 296] 00:34:06.299 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.299 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.299 lat (msec) : 50=99.30%, 250=0.35%, 500=0.35% 00:34:06.299 cpu : usr=97.64%, sys=1.54%, ctx=129, majf=0, minf=21 00:34:06.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522623: Tue Nov 19 11:01:52 2024 00:34:06.299 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10013msec) 00:34:06.299 slat (usec): min=8, max=133, avg=61.50, stdev=28.63 00:34:06.299 clat (msec): min=12, max=217, avg=34.22, stdev=14.89 00:34:06.299 lat (msec): min=12, max=217, avg=34.28, stdev=14.89 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 41], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 218], 00:34:06.299 | 99.99th=[ 218] 00:34:06.299 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.299 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.299 lat (msec) : 20=0.35%, 50=98.96%, 250=0.69% 00:34:06.299 cpu : usr=97.77%, sys=1.49%, ctx=98, majf=0, minf=28 00:34:06.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522624: Tue Nov 19 11:01:52 2024 00:34:06.299 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10005msec) 00:34:06.299 slat (usec): min=8, max=111, avg=43.32, stdev=18.41 00:34:06.299 clat (msec): min=15, max=358, avg=34.47, stdev=19.40 00:34:06.299 lat (msec): min=15, max=358, avg=34.51, stdev=19.40 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 39], 99.50th=[ 82], 99.90th=[ 359], 99.95th=[ 359], 00:34:06.299 | 99.99th=[ 359] 00:34:06.299 bw ( KiB/s): min= 384, max= 1920, per=4.13%, avg=1825.68, stdev=351.43, samples=19 00:34:06.299 iops : min= 96, max= 480, avg=456.42, stdev=87.86, samples=19 00:34:06.299 lat (msec) : 20=0.35%, 50=98.95%, 100=0.35%, 500=0.35% 00:34:06.299 cpu : usr=96.92%, sys=2.04%, ctx=246, majf=0, minf=22 00:34:06.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.299 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.299 filename1: (groupid=0, jobs=1): err= 0: pid=1522625: Tue Nov 19 11:01:52 2024 00:34:06.299 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:34:06.299 slat (nsec): min=8573, max=79478, avg=34306.91, stdev=10065.16 00:34:06.299 clat (msec): min=32, max=191, avg=34.44, stdev=11.00 00:34:06.299 lat (msec): min=32, 
max=191, avg=34.47, stdev=11.00 00:34:06.299 clat percentiles (msec): 00:34:06.299 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.299 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.299 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.299 | 99.00th=[ 75], 99.50th=[ 118], 99.90th=[ 192], 99.95th=[ 192], 00:34:06.299 | 99.99th=[ 192] 00:34:06.299 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1839.16, stdev=293.16, samples=19 00:34:06.300 iops : min= 160, max= 480, avg=459.79, stdev=73.29, samples=19 00:34:06.300 lat (msec) : 50=98.61%, 100=0.74%, 250=0.65% 00:34:06.300 cpu : usr=96.14%, sys=2.28%, ctx=324, majf=0, minf=29 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 filename2: (groupid=0, jobs=1): err= 0: pid=1522626: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10006msec) 00:34:06.300 slat (usec): min=6, max=117, avg=44.90, stdev=20.77 00:34:06.300 clat (msec): min=15, max=358, avg=34.46, stdev=19.43 00:34:06.300 lat (msec): min=15, max=358, avg=34.50, stdev=19.43 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 38], 99.50th=[ 82], 99.90th=[ 359], 99.95th=[ 359], 00:34:06.300 | 99.99th=[ 359] 00:34:06.300 bw ( KiB/s): min= 384, max= 1920, per=4.13%, avg=1825.68, stdev=351.43, samples=19 00:34:06.300 iops : min= 96, max= 480, avg=456.42, stdev=87.86, samples=19 
00:34:06.300 lat (msec) : 20=0.35%, 50=98.95%, 100=0.35%, 500=0.35% 00:34:06.300 cpu : usr=97.28%, sys=1.56%, ctx=108, majf=0, minf=20 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 filename2: (groupid=0, jobs=1): err= 0: pid=1522627: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=459, BW=1836KiB/s (1880kB/s)(17.9MiB/10004msec) 00:34:06.300 slat (nsec): min=8709, max=64346, avg=25562.69, stdev=7193.29 00:34:06.300 clat (msec): min=22, max=306, avg=34.61, stdev=15.28 00:34:06.300 lat (msec): min=22, max=306, avg=34.64, stdev=15.28 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 40], 99.50th=[ 205], 99.90th=[ 222], 99.95th=[ 222], 00:34:06.300 | 99.99th=[ 309] 00:34:06.300 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.300 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.300 lat (msec) : 50=99.30%, 250=0.65%, 500=0.04% 00:34:06.300 cpu : usr=97.19%, sys=1.79%, ctx=124, majf=0, minf=28 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 
filename2: (groupid=0, jobs=1): err= 0: pid=1522628: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=459, BW=1836KiB/s (1880kB/s)(17.9MiB/10004msec) 00:34:06.300 slat (usec): min=8, max=131, avg=73.00, stdev=20.04 00:34:06.300 clat (msec): min=23, max=221, avg=34.21, stdev=15.04 00:34:06.300 lat (msec): min=23, max=221, avg=34.28, stdev=15.04 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 40], 99.50th=[ 205], 99.90th=[ 222], 99.95th=[ 222], 00:34:06.300 | 99.99th=[ 222] 00:34:06.300 bw ( KiB/s): min= 512, max= 1920, per=4.14%, avg=1832.42, stdev=322.27, samples=19 00:34:06.300 iops : min= 128, max= 480, avg=458.11, stdev=80.57, samples=19 00:34:06.300 lat (msec) : 50=99.30%, 250=0.70% 00:34:06.300 cpu : usr=96.44%, sys=2.18%, ctx=143, majf=0, minf=20 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 filename2: (groupid=0, jobs=1): err= 0: pid=1522629: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10007msec) 00:34:06.300 slat (nsec): min=7850, max=76288, avg=34148.26, stdev=10177.45 00:34:06.300 clat (msec): min=16, max=360, avg=34.56, stdev=19.55 00:34:06.300 lat (msec): min=16, max=360, avg=34.60, stdev=19.55 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 
80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 39], 99.50th=[ 82], 99.90th=[ 363], 99.95th=[ 363], 00:34:06.300 | 99.99th=[ 363] 00:34:06.300 bw ( KiB/s): min= 384, max= 1920, per=4.13%, avg=1825.68, stdev=351.43, samples=19 00:34:06.300 iops : min= 96, max= 480, avg=456.42, stdev=87.86, samples=19 00:34:06.300 lat (msec) : 20=0.35%, 50=98.95%, 100=0.35%, 500=0.35% 00:34:06.300 cpu : usr=98.01%, sys=1.48%, ctx=40, majf=0, minf=25 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 filename2: (groupid=0, jobs=1): err= 0: pid=1522630: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10014msec) 00:34:06.300 slat (usec): min=4, max=119, avg=29.80, stdev=23.73 00:34:06.300 clat (msec): min=9, max=205, avg=34.25, stdev=12.30 00:34:06.300 lat (msec): min=9, max=205, avg=34.28, stdev=12.30 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 77], 99.50th=[ 140], 99.90th=[ 207], 99.95th=[ 207], 00:34:06.300 | 99.99th=[ 207] 00:34:06.300 bw ( KiB/s): min= 896, max= 1923, per=4.18%, avg=1850.50, stdev=229.37, samples=20 00:34:06.300 iops : min= 224, max= 480, avg=462.40, stdev=57.31, samples=20 00:34:06.300 lat (msec) : 10=0.34%, 20=0.69%, 50=97.93%, 100=0.34%, 250=0.69% 00:34:06.300 cpu : usr=96.66%, sys=2.11%, ctx=190, majf=0, minf=32 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.300 filename2: (groupid=0, jobs=1): err= 0: pid=1522631: Tue Nov 19 11:01:52 2024 00:34:06.300 read: IOPS=459, BW=1836KiB/s (1881kB/s)(17.9MiB/10002msec) 00:34:06.300 slat (nsec): min=8309, max=96235, avg=33710.98, stdev=11026.26 00:34:06.300 clat (msec): min=21, max=359, avg=34.53, stdev=17.02 00:34:06.300 lat (msec): min=21, max=359, avg=34.57, stdev=17.02 00:34:06.300 clat percentiles (msec): 00:34:06.300 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.300 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.300 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.300 | 99.00th=[ 41], 99.50th=[ 133], 99.90th=[ 300], 99.95th=[ 300], 00:34:06.300 | 99.99th=[ 359] 00:34:06.300 bw ( KiB/s): min= 384, max= 1920, per=4.14%, avg=1832.42, stdev=351.97, samples=19 00:34:06.300 iops : min= 96, max= 480, avg=458.11, stdev=87.99, samples=19 00:34:06.300 lat (msec) : 50=99.30%, 100=0.04%, 250=0.30%, 500=0.35% 00:34:06.300 cpu : usr=97.82%, sys=1.47%, ctx=68, majf=0, minf=20 00:34:06.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.301 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.301 filename2: (groupid=0, jobs=1): err= 0: pid=1522633: Tue Nov 19 11:01:52 2024 00:34:06.301 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10014msec) 00:34:06.301 slat (usec): min=5, max=118, avg=46.02, stdev=29.80 
00:34:06.301 clat (msec): min=21, max=191, avg=34.37, stdev=11.11 00:34:06.301 lat (msec): min=21, max=191, avg=34.42, stdev=11.11 00:34:06.301 clat percentiles (msec): 00:34:06.301 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.301 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.301 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:34:06.301 | 99.00th=[ 83], 99.50th=[ 118], 99.90th=[ 192], 99.95th=[ 192], 00:34:06.301 | 99.99th=[ 192] 00:34:06.301 bw ( KiB/s): min= 513, max= 1920, per=4.15%, avg=1836.85, stdev=314.07, samples=20 00:34:06.301 iops : min= 128, max= 480, avg=459.20, stdev=78.57, samples=20 00:34:06.301 lat (msec) : 50=98.61%, 100=0.69%, 250=0.69% 00:34:06.301 cpu : usr=97.54%, sys=1.66%, ctx=33, majf=0, minf=26 00:34:06.301 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.301 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.301 filename2: (groupid=0, jobs=1): err= 0: pid=1522634: Tue Nov 19 11:01:52 2024 00:34:06.301 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:34:06.301 slat (usec): min=10, max=116, avg=41.33, stdev=19.26 00:34:06.301 clat (msec): min=24, max=191, avg=34.37, stdev=11.03 00:34:06.301 lat (msec): min=24, max=191, avg=34.41, stdev=11.03 00:34:06.301 clat percentiles (msec): 00:34:06.301 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:06.301 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:06.301 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:34:06.301 | 99.00th=[ 75], 99.50th=[ 118], 99.90th=[ 192], 99.95th=[ 192], 00:34:06.301 | 99.99th=[ 192] 00:34:06.301 bw ( KiB/s): min= 624, max= 1920, per=4.16%, avg=1839.16, 
stdev=296.68, samples=19 00:34:06.301 iops : min= 156, max= 480, avg=459.79, stdev=74.17, samples=19 00:34:06.301 lat (msec) : 50=98.61%, 100=0.69%, 250=0.69% 00:34:06.301 cpu : usr=98.31%, sys=1.28%, ctx=20, majf=0, minf=24 00:34:06.301 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.301 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.301 00:34:06.301 Run status group 0 (all jobs): 00:34:06.301 READ: bw=43.2MiB/s (45.3MB/s), 1835KiB/s-2002KiB/s (1879kB/s-2050kB/s), io=433MiB (454MB), run=10001-10028msec 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 
11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 bdev_null0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
--serial-number 53313233-0 --allow-any-host 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.301 [2024-11-19 11:01:52.667802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:06.301 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.302 bdev_null1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.302 { 00:34:06.302 "params": { 00:34:06.302 "name": "Nvme$subsystem", 00:34:06.302 "trtype": "$TEST_TRANSPORT", 00:34:06.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.302 "adrfam": "ipv4", 00:34:06.302 "trsvcid": "$NVMF_PORT", 00:34:06.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.302 "hdgst": ${hdgst:-false}, 00:34:06.302 "ddgst": ${ddgst:-false} 00:34:06.302 }, 00:34:06.302 "method": "bdev_nvme_attach_controller" 00:34:06.302 } 00:34:06.302 EOF 00:34:06.302 )") 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:06.302 11:01:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.302 { 00:34:06.302 "params": { 00:34:06.302 "name": "Nvme$subsystem", 00:34:06.302 "trtype": "$TEST_TRANSPORT", 00:34:06.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.302 "adrfam": "ipv4", 00:34:06.302 "trsvcid": "$NVMF_PORT", 00:34:06.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.302 "hdgst": ${hdgst:-false}, 00:34:06.302 "ddgst": ${ddgst:-false} 00:34:06.302 }, 00:34:06.302 "method": "bdev_nvme_attach_controller" 00:34:06.302 } 00:34:06.302 EOF 00:34:06.302 )") 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:06.302 11:01:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.302 "params": { 00:34:06.302 "name": "Nvme0", 00:34:06.302 "trtype": "tcp", 00:34:06.302 "traddr": "10.0.0.2", 00:34:06.302 "adrfam": "ipv4", 00:34:06.302 "trsvcid": "4420", 00:34:06.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:06.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:06.302 "hdgst": false, 00:34:06.302 "ddgst": false 00:34:06.302 }, 00:34:06.302 "method": "bdev_nvme_attach_controller" 00:34:06.302 },{ 00:34:06.302 "params": { 00:34:06.302 "name": "Nvme1", 00:34:06.302 "trtype": "tcp", 00:34:06.302 "traddr": "10.0.0.2", 00:34:06.302 "adrfam": "ipv4", 00:34:06.302 "trsvcid": "4420", 00:34:06.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.302 "hdgst": false, 00:34:06.302 "ddgst": false 00:34:06.302 }, 00:34:06.302 "method": "bdev_nvme_attach_controller" 00:34:06.302 }' 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:06.302 11:01:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.302 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:06.302 ... 00:34:06.302 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:06.302 ... 00:34:06.302 fio-3.35 00:34:06.302 Starting 4 threads 00:34:11.566 00:34:11.566 filename0: (groupid=0, jobs=1): err= 0: pid=1524052: Tue Nov 19 11:01:58 2024 00:34:11.566 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:34:11.566 slat (usec): min=4, max=104, avg=19.91, stdev=10.58 00:34:11.566 clat (usec): min=923, max=7616, avg=4145.71, stdev=603.92 00:34:11.566 lat (usec): min=942, max=7645, avg=4165.62, stdev=604.77 00:34:11.566 clat percentiles (usec): 00:34:11.566 | 1.00th=[ 1975], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3884], 00:34:11.566 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:34:11.566 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5014], 00:34:11.566 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7373], 00:34:11.566 | 99.99th=[ 7635] 00:34:11.566 bw ( KiB/s): min=14832, max=16272, per=25.40%, avg=15208.89, stdev=444.35, samples=9 00:34:11.566 iops : min= 1854, max= 2034, avg=1901.11, stdev=55.54, samples=9 00:34:11.566 lat (usec) : 1000=0.02% 00:34:11.566 lat (msec) : 2=1.01%, 4=25.57%, 10=73.40% 00:34:11.566 cpu : usr=90.22%, sys=6.54%, ctx=297, majf=0, minf=0 00:34:11.566 IO depths : 1=0.6%, 2=15.9%, 4=56.8%, 8=26.7%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:34:11.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.566 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.566 issued rwts: total=9491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.566 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.566 filename0: (groupid=0, jobs=1): err= 0: pid=1524053: Tue Nov 19 11:01:58 2024 00:34:11.566 read: IOPS=1830, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:34:11.567 slat (nsec): min=6799, max=65429, avg=17392.73, stdev=10222.65 00:34:11.567 clat (usec): min=713, max=9770, avg=4309.70, stdev=660.34 00:34:11.567 lat (usec): min=732, max=9787, avg=4327.09, stdev=659.65 00:34:11.567 clat percentiles (usec): 00:34:11.567 | 1.00th=[ 2638], 5.00th=[ 3458], 10.00th=[ 3752], 20.00th=[ 4015], 00:34:11.567 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:11.567 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5604], 00:34:11.567 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7570], 99.95th=[ 7767], 00:34:11.567 | 99.99th=[ 9765] 00:34:11.567 bw ( KiB/s): min=13979, max=15504, per=24.44%, avg=14630.56, stdev=448.41, samples=9 00:34:11.567 iops : min= 1747, max= 1938, avg=1828.78, stdev=56.12, samples=9 00:34:11.567 lat (usec) : 750=0.01%, 1000=0.02% 00:34:11.567 lat (msec) : 2=0.42%, 4=19.26%, 10=80.29% 00:34:11.567 cpu : usr=95.46%, sys=4.06%, ctx=9, majf=0, minf=9 00:34:11.567 IO depths : 1=0.3%, 2=13.0%, 4=59.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 issued rwts: total=9155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.567 filename1: (groupid=0, jobs=1): err= 0: pid=1524054: Tue Nov 19 11:01:58 2024 00:34:11.567 read: IOPS=1908, BW=14.9MiB/s 
(15.6MB/s)(74.6MiB/5002msec) 00:34:11.567 slat (nsec): min=6061, max=64737, avg=15137.00, stdev=9295.78 00:34:11.567 clat (usec): min=654, max=7695, avg=4139.43, stdev=585.43 00:34:11.567 lat (usec): min=663, max=7716, avg=4154.56, stdev=586.17 00:34:11.567 clat percentiles (usec): 00:34:11.567 | 1.00th=[ 2343], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3818], 00:34:11.567 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:34:11.567 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4948], 00:34:11.567 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7570], 00:34:11.567 | 99.99th=[ 7701] 00:34:11.567 bw ( KiB/s): min=14912, max=15872, per=25.52%, avg=15276.44, stdev=314.23, samples=9 00:34:11.567 iops : min= 1864, max= 1984, avg=1909.56, stdev=39.28, samples=9 00:34:11.567 lat (usec) : 750=0.02%, 1000=0.02% 00:34:11.567 lat (msec) : 2=0.54%, 4=29.30%, 10=70.11% 00:34:11.567 cpu : usr=95.22%, sys=4.28%, ctx=8, majf=0, minf=9 00:34:11.567 IO depths : 1=0.5%, 2=13.8%, 4=58.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 issued rwts: total=9545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.567 filename1: (groupid=0, jobs=1): err= 0: pid=1524055: Tue Nov 19 11:01:58 2024 00:34:11.567 read: IOPS=1848, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5001msec) 00:34:11.567 slat (nsec): min=6342, max=70725, avg=17791.72, stdev=10433.02 00:34:11.567 clat (usec): min=795, max=8059, avg=4265.80, stdev=668.56 00:34:11.567 lat (usec): min=808, max=8080, avg=4283.59, stdev=668.47 00:34:11.567 clat percentiles (usec): 00:34:11.567 | 1.00th=[ 2278], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3982], 00:34:11.567 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:11.567 | 70.00th=[ 
4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5407], 00:34:11.567 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7439], 99.95th=[ 7701], 00:34:11.567 | 99.99th=[ 8029] 00:34:11.567 bw ( KiB/s): min=14432, max=15024, per=24.67%, avg=14769.78, stdev=209.30, samples=9 00:34:11.567 iops : min= 1804, max= 1878, avg=1846.22, stdev=26.16, samples=9 00:34:11.567 lat (usec) : 1000=0.10% 00:34:11.567 lat (msec) : 2=0.68%, 4=20.74%, 10=78.48% 00:34:11.567 cpu : usr=95.38%, sys=4.16%, ctx=9, majf=0, minf=9 00:34:11.567 IO depths : 1=0.2%, 2=13.9%, 4=58.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.567 issued rwts: total=9242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.567 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.567 00:34:11.567 Run status group 0 (all jobs): 00:34:11.567 READ: bw=58.5MiB/s (61.3MB/s), 14.3MiB/s-14.9MiB/s (15.0MB/s-15.6MB/s), io=292MiB (307MB), run=5001-5002msec 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.567 
11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:11.567 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 00:34:11.568 real 0m24.503s 00:34:11.568 user 4m31.827s 00:34:11.568 sys 0m6.854s 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 ************************************ 00:34:11.568 END TEST fio_dif_rand_params 00:34:11.568 ************************************ 00:34:11.568 11:01:59 nvmf_dif -- 
target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:11.568 11:01:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.568 11:01:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 ************************************ 00:34:11.568 START TEST fio_dif_digest 00:34:11.568 ************************************ 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 
00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 bdev_null0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.568 [2024-11-19 11:01:59.175595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:11.568 11:01:59 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.568 { 00:34:11.568 "params": { 00:34:11.568 "name": "Nvme$subsystem", 00:34:11.568 "trtype": "$TEST_TRANSPORT", 00:34:11.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.568 "adrfam": "ipv4", 00:34:11.568 "trsvcid": "$NVMF_PORT", 00:34:11.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.568 "hdgst": ${hdgst:-false}, 00:34:11.568 "ddgst": ${ddgst:-false} 00:34:11.568 }, 00:34:11.568 "method": "bdev_nvme_attach_controller" 00:34:11.568 } 00:34:11.568 EOF 00:34:11.568 )") 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:11.568 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:11.569 11:01:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.569 "params": { 00:34:11.569 "name": "Nvme0", 00:34:11.569 "trtype": "tcp", 00:34:11.569 "traddr": "10.0.0.2", 00:34:11.569 "adrfam": "ipv4", 00:34:11.569 "trsvcid": "4420", 00:34:11.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.569 "hdgst": true, 00:34:11.569 "ddgst": true 00:34:11.569 }, 00:34:11.569 "method": "bdev_nvme_attach_controller" 00:34:11.569 }' 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:11.826 11:01:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.826 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:11.826 ... 
00:34:11.826 fio-3.35 00:34:11.826 Starting 3 threads 00:34:24.014 00:34:24.014 filename0: (groupid=0, jobs=1): err= 0: pid=1524806: Tue Nov 19 11:02:10 2024 00:34:24.014 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10046msec) 00:34:24.014 slat (nsec): min=4861, max=41515, avg=15005.06, stdev=3832.53 00:34:24.014 clat (usec): min=11881, max=52661, avg=14902.36, stdev=1566.65 00:34:24.014 lat (usec): min=11893, max=52675, avg=14917.36, stdev=1566.64 00:34:24.014 clat percentiles (usec): 00:34:24.014 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:34:24.014 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:34:24.014 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:34:24.014 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19268], 99.95th=[51119], 00:34:24.014 | 99.99th=[52691] 00:34:24.014 bw ( KiB/s): min=24576, max=26624, per=33.68%, avg=25794.50, stdev=554.38, samples=20 00:34:24.014 iops : min= 192, max= 208, avg=201.50, stdev= 4.35, samples=20 00:34:24.014 lat (msec) : 20=99.90%, 100=0.10% 00:34:24.014 cpu : usr=95.21%, sys=4.31%, ctx=18, majf=0, minf=149 00:34:24.014 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.014 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.014 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.014 filename0: (groupid=0, jobs=1): err= 0: pid=1524807: Tue Nov 19 11:02:10 2024 00:34:24.014 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(261MiB/10007msec) 00:34:24.014 slat (nsec): min=4741, max=60265, avg=19510.92, stdev=5696.99 00:34:24.014 clat (usec): min=8287, max=21425, avg=14345.05, stdev=1068.97 00:34:24.014 lat (usec): min=8305, max=21452, avg=14364.56, stdev=1068.83 00:34:24.014 clat percentiles (usec): 00:34:24.014 | 
1.00th=[11863], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:34:24.014 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:34:24.014 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:34:24.014 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21365], 99.95th=[21365], 00:34:24.014 | 99.99th=[21365] 00:34:24.014 bw ( KiB/s): min=25856, max=27648, per=34.88%, avg=26713.60, stdev=486.26, samples=20 00:34:24.014 iops : min= 202, max= 216, avg=208.70, stdev= 3.80, samples=20 00:34:24.014 lat (msec) : 10=0.05%, 20=99.81%, 50=0.14% 00:34:24.014 cpu : usr=94.98%, sys=4.50%, ctx=17, majf=0, minf=151 00:34:24.014 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.014 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.014 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.014 filename0: (groupid=0, jobs=1): err= 0: pid=1524808: Tue Nov 19 11:02:10 2024 00:34:24.014 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:34:24.014 slat (usec): min=5, max=123, avg=15.06, stdev= 4.41 00:34:24.014 clat (usec): min=12701, max=53635, avg=15787.06, stdev=1541.82 00:34:24.014 lat (usec): min=12719, max=53647, avg=15802.12, stdev=1541.73 00:34:24.014 clat percentiles (usec): 00:34:24.014 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14484], 20.00th=[14877], 00:34:24.014 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:34:24.014 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:34:24.014 | 99.00th=[18482], 99.50th=[18744], 99.90th=[47973], 99.95th=[53740], 00:34:24.014 | 99.99th=[53740] 00:34:24.014 bw ( KiB/s): min=22738, max=24832, per=31.79%, avg=24343.30, stdev=477.21, samples=20 00:34:24.014 iops : min= 177, max= 194, avg=190.15, stdev= 3.84, samples=20 
00:34:24.014 lat (msec) : 20=99.74%, 50=0.21%, 100=0.05% 00:34:24.014 cpu : usr=95.35%, sys=4.17%, ctx=25, majf=0, minf=202 00:34:24.014 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.015 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.015 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.015 00:34:24.015 Run status group 0 (all jobs): 00:34:24.015 READ: bw=74.8MiB/s (78.4MB/s), 23.7MiB/s-26.1MiB/s (24.8MB/s-27.4MB/s), io=751MiB (788MB), run=10007-10046msec 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.015 00:34:24.015 real 0m11.106s 
00:34:24.015 user 0m29.817s 00:34:24.015 sys 0m1.576s 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.015 11:02:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.015 ************************************ 00:34:24.015 END TEST fio_dif_digest 00:34:24.015 ************************************ 00:34:24.015 11:02:10 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:24.015 11:02:10 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.015 rmmod nvme_tcp 00:34:24.015 rmmod nvme_fabrics 00:34:24.015 rmmod nvme_keyring 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1518673 ']' 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1518673 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1518673 ']' 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1518673 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1518673 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.015 11:02:10 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1518673' 00:34:24.015 killing process with pid 1518673 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1518673 00:34:24.015 11:02:10 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1518673 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:24.015 11:02:10 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:24.015 Waiting for block devices as requested 00:34:24.273 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:24.273 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:24.273 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:24.531 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:24.531 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:24.531 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:24.531 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:24.790 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:24.790 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:34:24.790 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:25.048 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:25.048 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:25.048 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:25.308 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:25.308 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:25.308 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:25.308 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:25.566 11:02:13 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.566 11:02:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:25.566 11:02:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.468 11:02:15 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.468 00:34:27.468 real 1m7.538s 00:34:27.468 user 6m29.829s 00:34:27.468 sys 0m18.031s 00:34:27.468 11:02:15 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.468 11:02:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.468 ************************************ 00:34:27.468 END TEST nvmf_dif 00:34:27.468 ************************************ 00:34:27.727 11:02:15 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:27.727 11:02:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:27.727 11:02:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.727 11:02:15 -- common/autotest_common.sh@10 -- # set +x 00:34:27.727 ************************************ 00:34:27.727 START TEST nvmf_abort_qd_sizes 00:34:27.727 ************************************ 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:27.727 * Looking for test storage... 
00:34:27.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.727 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.728 --rc genhtml_branch_coverage=1 00:34:27.728 --rc genhtml_function_coverage=1 00:34:27.728 --rc genhtml_legend=1 00:34:27.728 --rc geninfo_all_blocks=1 00:34:27.728 --rc geninfo_unexecuted_blocks=1 00:34:27.728 00:34:27.728 ' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.728 --rc genhtml_branch_coverage=1 00:34:27.728 --rc genhtml_function_coverage=1 00:34:27.728 --rc genhtml_legend=1 00:34:27.728 --rc 
geninfo_all_blocks=1 00:34:27.728 --rc geninfo_unexecuted_blocks=1 00:34:27.728 00:34:27.728 ' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.728 --rc genhtml_branch_coverage=1 00:34:27.728 --rc genhtml_function_coverage=1 00:34:27.728 --rc genhtml_legend=1 00:34:27.728 --rc geninfo_all_blocks=1 00:34:27.728 --rc geninfo_unexecuted_blocks=1 00:34:27.728 00:34:27.728 ' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.728 --rc genhtml_branch_coverage=1 00:34:27.728 --rc genhtml_function_coverage=1 00:34:27.728 --rc genhtml_legend=1 00:34:27.728 --rc geninfo_all_blocks=1 00:34:27.728 --rc geninfo_unexecuted_blocks=1 00:34:27.728 00:34:27.728 ' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.728 11:02:15 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.728 11:02:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:27.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.728 11:02:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.261 11:02:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:30.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:30.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:30.261 Found net devices under 0000:09:00.0: cvl_0_0 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.261 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:30.262 Found net devices under 0000:09:00.1: cvl_0_1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:34:30.262 00:34:30.262 --- 10.0.0.2 ping statistics --- 00:34:30.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.262 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:30.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:34:30.262 00:34:30.262 --- 10.0.0.1 ping statistics --- 00:34:30.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.262 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:30.262 11:02:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:31.197 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:31.197 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:31.197 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:31.456 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:32.392 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.392 11:02:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1529731 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1529731 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1529731 ']' 00:34:32.392 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.393 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.393 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.393 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.393 11:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.393 [2024-11-19 11:02:20.008698] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:34:32.393 [2024-11-19 11:02:20.008782] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.693 [2024-11-19 11:02:20.084419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.693 [2024-11-19 11:02:20.146482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.693 [2024-11-19 11:02:20.146536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.693 [2024-11-19 11:02:20.146550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.693 [2024-11-19 11:02:20.146561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.693 [2024-11-19 11:02:20.146571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:32.693 [2024-11-19 11:02:20.150322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.693 [2024-11-19 11:02:20.150388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.693 [2024-11-19 11:02:20.150457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.693 [2024-11-19 11:02:20.150461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.693 11:02:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.693 ************************************ 00:34:32.693 START TEST spdk_target_abort 00:34:32.693 ************************************ 00:34:32.693 11:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:32.693 11:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:32.693 11:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:34:32.693 11:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.693 11:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.974 spdk_targetn1 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.974 [2024-11-19 11:02:23.157349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.974 [2024-11-19 11:02:23.202670] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:35.974 11:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.251 Initializing NVMe Controllers 00:34:39.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.251 Initialization complete. Launching workers. 
00:34:39.251 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12891, failed: 0 00:34:39.251 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 11669 00:34:39.251 success 762, unsuccessful 460, failed 0 00:34:39.251 11:02:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.251 11:02:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.589 Initializing NVMe Controllers 00:34:42.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.589 Initialization complete. Launching workers. 00:34:42.589 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8973, failed: 0 00:34:42.589 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7738 00:34:42.589 success 327, unsuccessful 908, failed 0 00:34:42.589 11:02:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.589 11:02:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.867 Initializing NVMe Controllers 00:34:45.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.867 Initialization complete. Launching workers. 
00:34:45.867 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30491, failed: 0 00:34:45.867 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2527, failed to submit 27964 00:34:45.867 success 511, unsuccessful 2016, failed 0 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.867 11:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1529731 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1529731 ']' 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1529731 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.799 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529731 00:34:47.057 11:02:34 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529731' 00:34:47.057 killing process with pid 1529731 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1529731 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1529731 00:34:47.057 00:34:47.057 real 0m14.346s 00:34:47.057 user 0m54.170s 00:34:47.057 sys 0m2.778s 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.057 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:47.057 ************************************ 00:34:47.057 END TEST spdk_target_abort 00:34:47.057 ************************************ 00:34:47.317 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:47.317 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:47.317 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.317 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:47.317 ************************************ 00:34:47.317 START TEST kernel_target_abort 00:34:47.317 ************************************ 00:34:47.317 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:47.317 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:47.317 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:47.317 11:02:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.317 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.317 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:47.318 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:48.255 Waiting for block devices as requested 00:34:48.513 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:48.513 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:48.513 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:48.771 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:48.771 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:48.771 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:48.771 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:48.771 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:49.029 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.029 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:49.288 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:49.288 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:49.288 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:49.288 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:49.547 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:49.547 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:49.547 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:49.806 11:02:37 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:49.806 No valid GPT data, bailing 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:34:49.806 00:34:49.806 Discovery Log Number of Records 2, Generation counter 2 00:34:49.806 =====Discovery Log Entry 0====== 00:34:49.806 trtype: tcp 00:34:49.806 adrfam: ipv4 00:34:49.806 subtype: current discovery subsystem 00:34:49.806 treq: not specified, sq flow control disable supported 00:34:49.806 portid: 1 00:34:49.806 trsvcid: 4420 00:34:49.806 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:49.806 traddr: 10.0.0.1 00:34:49.806 eflags: none 00:34:49.806 sectype: none 00:34:49.806 =====Discovery Log Entry 1====== 00:34:49.806 trtype: tcp 00:34:49.806 adrfam: ipv4 00:34:49.806 subtype: nvme subsystem 00:34:49.806 treq: not specified, sq flow control disable supported 00:34:49.806 portid: 1 00:34:49.806 trsvcid: 4420 00:34:49.806 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:49.806 traddr: 10.0.0.1 00:34:49.806 eflags: none 00:34:49.806 sectype: none 00:34:49.806 11:02:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:49.806 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:53.087 Initializing NVMe Controllers 00:34:53.087 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:53.087 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:53.087 Initialization complete. Launching workers. 
00:34:53.087 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48304, failed: 0 00:34:53.087 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48304, failed to submit 0 00:34:53.087 success 0, unsuccessful 48304, failed 0 00:34:53.087 11:02:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:53.087 11:02:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:56.371 Initializing NVMe Controllers 00:34:56.371 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:56.371 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:56.371 Initialization complete. Launching workers. 00:34:56.371 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95361, failed: 0 00:34:56.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21518, failed to submit 73843 00:34:56.371 success 0, unsuccessful 21518, failed 0 00:34:56.371 11:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:56.371 11:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.651 Initializing NVMe Controllers 00:34:59.651 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:59.651 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:59.651 Initialization complete. Launching workers. 
00:34:59.651 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89669, failed: 0 00:34:59.651 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22414, failed to submit 67255 00:34:59.651 success 0, unsuccessful 22414, failed 0 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:59.651 11:02:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:00.592 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:00.592 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:00.592 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:00.592 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:01.528 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:01.528 00:35:01.528 real 0m14.441s 00:35:01.528 user 0m6.138s 00:35:01.528 sys 0m3.501s 00:35:01.528 11:02:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.528 11:02:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:01.528 ************************************ 00:35:01.528 END TEST kernel_target_abort 00:35:01.528 ************************************ 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.788 rmmod nvme_tcp 00:35:01.788 rmmod nvme_fabrics 00:35:01.788 rmmod nvme_keyring 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1529731 ']' 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1529731 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1529731 ']' 00:35:01.788 11:02:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1529731 00:35:01.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1529731) - No such process 00:35:01.789 11:02:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1529731 is not found' 00:35:01.789 Process with pid 1529731 is not found 00:35:01.789 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:01.789 11:02:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:02.727 Waiting for block devices as requested 00:35:02.986 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:02.986 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:02.986 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:03.245 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:03.245 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:03.245 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:03.245 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:03.505 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:03.505 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:03.764 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:03.764 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:03.764 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:03.764 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:04.024 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:04.024 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:04.024 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:04.024 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.284 11:02:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.186 11:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.186 00:35:06.186 real 0m38.616s 00:35:06.186 user 1m2.500s 00:35:06.186 sys 0m10.025s 00:35:06.186 11:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.186 11:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.186 ************************************ 00:35:06.186 END TEST nvmf_abort_qd_sizes 00:35:06.186 ************************************ 00:35:06.186 11:02:53 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:06.186 11:02:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:06.186 11:02:53 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:06.186 11:02:53 -- common/autotest_common.sh@10 -- # set +x 00:35:06.186 ************************************ 00:35:06.186 START TEST keyring_file 00:35:06.186 ************************************ 00:35:06.186 11:02:53 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:06.445 * Looking for test storage... 00:35:06.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.445 11:02:53 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:06.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.445 --rc genhtml_branch_coverage=1 00:35:06.445 --rc genhtml_function_coverage=1 00:35:06.445 --rc genhtml_legend=1 00:35:06.445 --rc geninfo_all_blocks=1 00:35:06.445 --rc geninfo_unexecuted_blocks=1 00:35:06.445 00:35:06.445 ' 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:06.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.445 --rc genhtml_branch_coverage=1 00:35:06.445 --rc genhtml_function_coverage=1 00:35:06.445 --rc genhtml_legend=1 00:35:06.445 --rc geninfo_all_blocks=1 00:35:06.445 --rc 
geninfo_unexecuted_blocks=1 00:35:06.445 00:35:06.445 ' 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:06.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.445 --rc genhtml_branch_coverage=1 00:35:06.445 --rc genhtml_function_coverage=1 00:35:06.445 --rc genhtml_legend=1 00:35:06.445 --rc geninfo_all_blocks=1 00:35:06.445 --rc geninfo_unexecuted_blocks=1 00:35:06.445 00:35:06.445 ' 00:35:06.445 11:02:53 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:06.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.445 --rc genhtml_branch_coverage=1 00:35:06.445 --rc genhtml_function_coverage=1 00:35:06.445 --rc genhtml_legend=1 00:35:06.445 --rc geninfo_all_blocks=1 00:35:06.445 --rc geninfo_unexecuted_blocks=1 00:35:06.445 00:35:06.445 ' 00:35:06.445 11:02:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:06.445 11:02:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.445 11:02:53 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.445 11:02:53 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.445 11:02:53 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.445 11:02:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.445 11:02:53 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.445 11:02:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.445 11:02:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:06.446 11:02:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:06.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vEnPXcet1P 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vEnPXcet1P 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vEnPXcet1P 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vEnPXcet1P 00:35:06.446 11:02:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gbWkzoqg7v 00:35:06.446 11:02:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:06.446 11:02:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:06.446 11:02:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gbWkzoqg7v 00:35:06.446 11:02:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gbWkzoqg7v 00:35:06.446 11:02:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gbWkzoqg7v 
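The key-prep steps traced above pipe each hex key through `format_interchange_psk`, which hands off to an inline `python -` heredoc. A minimal sketch of what that step plausibly computes, assuming the NVMe/TCP TLS PSK interchange layout `prefix:digest:base64(key || crc32_le):` with digest `00` meaning no PSK hash (the exact encoding lives in SPDK's `nvmf/common.sh`, not shown in this log):

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a raw hex PSK in the TLS PSK interchange format.

    Assumption: the base64 payload is the key bytes followed by their
    CRC32 in little-endian order, mirroring the inline python step that
    the trace invokes for key0 and key1.
    """
    key = bytes.fromhex(hex_key)
    payload = key + zlib.crc32(key).to_bytes(4, "little")
    return f"{prefix}:{digest:02x}:{base64.b64encode(payload).decode()}:"

# The wrapped key is what gets written to the mktemp path and
# chmod'ed to 0600 before keyring_file_add_key is called.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The 0600 chmod matters: a later step in this same test deliberately loosens the mode to prove the keyring rejects group-readable key files.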
00:35:06.446 11:02:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=1535509 00:35:06.446 11:02:54 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:06.446 11:02:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1535509 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1535509 ']' 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.446 11:02:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.704 [2024-11-19 11:02:54.086562] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
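`waitforlisten 1535509` above blocks until `spdk_tgt` is accepting JSON-RPC on `/var/tmp/spdk.sock`. An illustrative stand-in for that wait (the real shell helper in `autotest_common.sh` also verifies the target PID is still alive between polls):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 10.0,
                         interval: float = 0.1) -> bool:
    """Poll until something accepts connections on a UNIX socket.

    Sketch of the `waitforlisten` idea only: retry connect() until the
    deadline, returning False if the socket never comes up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            try:
                sock.connect(path)
                return True
            except OSError:
                time.sleep(interval)
    return False
```

The trace prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." from exactly this kind of loop.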
00:35:06.704 [2024-11-19 11:02:54.086655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535509 ] 00:35:06.704 [2024-11-19 11:02:54.151927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.704 [2024-11-19 11:02:54.211450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.962 11:02:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.962 [2024-11-19 11:02:54.492546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.962 null0 00:35:06.962 [2024-11-19 11:02:54.524620] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:06.962 [2024-11-19 11:02:54.525113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.962 11:02:54 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
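The `NOT rpc_cmd nvmf_subsystem_add_listener ...` call above is a negative test: the target already listens on 127.0.0.1:4420, so the duplicate add must fail, and the `NOT` wrapper inverts the exit status so the failure counts as a pass. A sketch of that inversion pattern (the shell version additionally treats exit codes above 128 as signals and does not swallow them):

```python
def NOT(func, *args, **kwargs) -> bool:
    """Succeed only if the wrapped call fails, like the shell NOT helper."""
    try:
        func(*args, **kwargs)
    except Exception:
        return True   # expected failure: negative test passes
    return False      # unexpected success: negative test fails

def add_listener_again():
    # Hypothetical stand-in for the duplicate nvmf_subsystem_add_listener
    # RPC, which the target rejects ("Listener already exists", -32602).
    raise RuntimeError("Listener already exists")

print(NOT(add_listener_again))
```

This is why the trace shows `es=1` followed by the test continuing normally: the JSON-RPC "Invalid parameters" error was the expected outcome.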
00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.962 [2024-11-19 11:02:54.548656] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:06.962 request: 00:35:06.962 { 00:35:06.962 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.962 "secure_channel": false, 00:35:06.962 "listen_address": { 00:35:06.962 "trtype": "tcp", 00:35:06.962 "traddr": "127.0.0.1", 00:35:06.962 "trsvcid": "4420" 00:35:06.962 }, 00:35:06.962 "method": "nvmf_subsystem_add_listener", 00:35:06.962 "req_id": 1 00:35:06.962 } 00:35:06.962 Got JSON-RPC error response 00:35:06.962 response: 00:35:06.962 { 00:35:06.962 "code": -32602, 00:35:06.962 "message": "Invalid parameters" 00:35:06.962 } 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.962 11:02:54 keyring_file -- keyring/file.sh@47 -- # bperfpid=1535515 00:35:06.962 11:02:54 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:06.962 11:02:54 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1535515 /var/tmp/bperf.sock 00:35:06.962 11:02:54 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1535515 ']' 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.962 11:02:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:07.220 [2024-11-19 11:02:54.597955] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:35:07.220 [2024-11-19 11:02:54.598030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535515 ] 00:35:07.220 [2024-11-19 11:02:54.662041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.220 [2024-11-19 11:02:54.718942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.220 11:02:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.220 11:02:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:07.220 11:02:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:07.221 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:07.786 11:02:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gbWkzoqg7v 00:35:07.786 11:02:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gbWkzoqg7v 00:35:07.786 11:02:55 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:07.786 11:02:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:07.786 11:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.786 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.786 11:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.044 11:02:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vEnPXcet1P == \/\t\m\p\/\t\m\p\.\v\E\n\P\X\c\e\t\1\P ]] 00:35:08.044 11:02:55 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:08.044 11:02:55 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:08.044 11:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.044 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.044 11:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.610 11:02:55 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.gbWkzoqg7v == \/\t\m\p\/\t\m\p\.\g\b\W\k\z\o\q\g\7\v ]] 00:35:08.610 11:02:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:08.610 11:02:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.610 11:02:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.610 11:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.610 11:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.610 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
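The repeated `keyring_get_keys | jq '.[] | select(.name == "key0")'` pipeline above picks one key record out of the RPC's JSON array, and the `(( 1 == 1 ))` checks compare its `.refcnt`. The same selection in Python, run against a hypothetical response shaped like the fields the test reads (names and paths taken from the trace, other fields assumed):

```python
import json

def get_key(keys, name):
    """Equivalent of jq's `.[] | select(.name == NAME)` over keyring_get_keys output."""
    return next((k for k in keys if k.get("name") == name), None)

def get_refcnt(keys, name):
    """Read .refcnt for the named key; 0 if the key is absent."""
    key = get_key(keys, name)
    return key["refcnt"] if key else 0

# Hypothetical keyring_get_keys payload matching the trace's keys.
sample = json.loads(
    '[{"name": "key0", "path": "/tmp/tmp.vEnPXcet1P", "refcnt": 1},'
    ' {"name": "key1", "path": "/tmp/tmp.gbWkzoqg7v", "refcnt": 1}]')
print(get_key(sample, "key0")["path"])
```

Later in the trace, attaching `nvme0` with `--psk key0` bumps key0's refcount to 2 while key1 stays at 1, which is exactly what the `(( 2 == 2 ))` / `(( 1 == 1 ))` pairs verify.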
00:35:08.610 11:02:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:08.610 11:02:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:08.610 11:02:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:08.610 11:02:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.610 11:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.610 11:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.610 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.868 11:02:56 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:08.868 11:02:56 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.869 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.127 [2024-11-19 11:02:56.718257] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:09.384 nvme0n1 00:35:09.384 11:02:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:09.384 11:02:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.384 11:02:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.384 11:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.384 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.384 11:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:35:09.643 11:02:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:09.643 11:02:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:09.643 11:02:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.643 11:02:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.643 11:02:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.643 11:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.643 11:02:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.900 11:02:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:09.900 11:02:57 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.901 Running I/O for 1 seconds... 00:35:11.272 10339.00 IOPS, 40.39 MiB/s 00:35:11.272 Latency(us) 00:35:11.272 [2024-11-19T10:02:58.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.272 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:11.272 nvme0n1 : 1.01 10386.25 40.57 0.00 0.00 12281.72 5412.79 20388.98 00:35:11.272 [2024-11-19T10:02:58.895Z] =================================================================================================================== 00:35:11.272 [2024-11-19T10:02:58.896Z] Total : 10386.25 40.57 0.00 0.00 12281.72 5412.79 20388.98 00:35:11.273 { 00:35:11.273 "results": [ 00:35:11.273 { 00:35:11.273 "job": "nvme0n1", 00:35:11.273 "core_mask": "0x2", 00:35:11.273 "workload": "randrw", 00:35:11.273 "percentage": 50, 00:35:11.273 "status": "finished", 00:35:11.273 "queue_depth": 128, 00:35:11.273 "io_size": 4096, 00:35:11.273 "runtime": 1.007871, 00:35:11.273 "iops": 10386.249827606905, 00:35:11.273 "mibps": 40.571288389089474, 
00:35:11.273 "io_failed": 0, 00:35:11.273 "io_timeout": 0, 00:35:11.273 "avg_latency_us": 12281.724324431423, 00:35:11.273 "min_latency_us": 5412.788148148148, 00:35:11.273 "max_latency_us": 20388.977777777778 00:35:11.273 } 00:35:11.273 ], 00:35:11.273 "core_count": 1 00:35:11.273 } 00:35:11.273 11:02:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:11.273 11:02:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.273 11:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.530 11:02:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:11.530 11:02:59 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:11.530 11:02:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:11.530 11:02:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.530 11:02:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.530 11:02:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:11.530 11:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.795 11:02:59 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:11.795 11:02:59 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.795 11:02:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:11.795 11:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:12.103 [2024-11-19 11:02:59.583374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:12.103 [2024-11-19 11:02:59.584268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127b510 (107): Transport endpoint is not connected 00:35:12.103 [2024-11-19 11:02:59.585261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127b510 (9): Bad file descriptor 00:35:12.103 [2024-11-19 11:02:59.586261] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:12.103 [2024-11-19 11:02:59.586279] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:12.103 [2024-11-19 11:02:59.586314] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:12.103 [2024-11-19 11:02:59.586330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:12.103 request: 00:35:12.103 { 00:35:12.103 "name": "nvme0", 00:35:12.103 "trtype": "tcp", 00:35:12.103 "traddr": "127.0.0.1", 00:35:12.103 "adrfam": "ipv4", 00:35:12.103 "trsvcid": "4420", 00:35:12.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:12.103 "prchk_reftag": false, 00:35:12.103 "prchk_guard": false, 00:35:12.103 "hdgst": false, 00:35:12.103 "ddgst": false, 00:35:12.103 "psk": "key1", 00:35:12.103 "allow_unrecognized_csi": false, 00:35:12.103 "method": "bdev_nvme_attach_controller", 00:35:12.103 "req_id": 1 00:35:12.103 } 00:35:12.103 Got JSON-RPC error response 00:35:12.103 response: 00:35:12.103 { 00:35:12.103 "code": -5, 00:35:12.103 "message": "Input/output error" 00:35:12.103 } 00:35:12.103 11:02:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:12.103 11:02:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.103 11:02:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.103 11:02:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:12.103 11:02:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:12.103 11:02:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.103 11:02:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.103 11:02:59 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:12.103 11:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.103 11:02:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.388 11:02:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:12.388 11:02:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:12.388 11:02:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:12.388 11:02:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.388 11:02:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.388 11:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.388 11:02:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.646 11:03:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:12.646 11:03:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:12.646 11:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:12.904 11:03:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:12.904 11:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:13.163 11:03:00 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:13.163 11:03:00 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:13.163 11:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.422 11:03:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
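The bdevperf summary earlier in the trace reports both IOPS and MiB/s for the 4 KiB random-read/write run; the two columns are related by a fixed factor of io_size / 1 MiB (4096 / 1048576 = 1/256). Checking the reported figures against each other:

```python
def mib_per_s(iops: float, io_size: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed per-I/O size in bytes."""
    return iops * io_size / (1024 * 1024)

# Figures from the result block above (1 s randrw run, 4 KiB I/O, qd 128).
iops = 10386.249827606905
print(round(mib_per_s(iops, 4096), 2))  # matches the reported 40.57 MiB/s
```

The same relation explains the in-progress line "10339.00 IOPS, 40.39 MiB/s" printed while the run was still underway.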
)) 00:35:13.422 11:03:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vEnPXcet1P 00:35:13.422 11:03:01 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.422 11:03:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:13.422 11:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:13.680 [2024-11-19 11:03:01.283503] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vEnPXcet1P': 0100660 00:35:13.680 [2024-11-19 11:03:01.283537] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:13.680 request: 00:35:13.680 { 00:35:13.680 "name": "key0", 00:35:13.680 "path": "/tmp/tmp.vEnPXcet1P", 00:35:13.680 "method": "keyring_file_add_key", 00:35:13.680 "req_id": 1 00:35:13.680 } 00:35:13.680 Got JSON-RPC error response 00:35:13.680 response: 00:35:13.680 { 00:35:13.680 "code": -1, 00:35:13.680 "message": "Operation not permitted" 00:35:13.680 } 00:35:13.938 11:03:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:13.938 11:03:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:13.938 11:03:01 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:13.938 11:03:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:13.938 11:03:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vEnPXcet1P 00:35:13.938 11:03:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:13.938 11:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vEnPXcet1P 00:35:14.197 11:03:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vEnPXcet1P 00:35:14.197 11:03:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:14.197 11:03:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.197 11:03:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.197 11:03:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.197 11:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.197 11:03:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.454 11:03:01 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:14.455 11:03:01 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:14.455 11:03:01 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.455 11:03:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.455 11:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.712 [2024-11-19 11:03:02.133822] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vEnPXcet1P': No such file or directory 00:35:14.712 [2024-11-19 11:03:02.133854] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:14.712 [2024-11-19 11:03:02.133890] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:14.712 [2024-11-19 11:03:02.133903] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:14.712 [2024-11-19 11:03:02.133915] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:14.712 [2024-11-19 11:03:02.133926] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:14.712 request: 00:35:14.712 { 00:35:14.712 "name": "nvme0", 00:35:14.712 "trtype": "tcp", 00:35:14.712 "traddr": "127.0.0.1", 00:35:14.712 "adrfam": "ipv4", 00:35:14.712 "trsvcid": "4420", 00:35:14.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.712 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:14.712 "prchk_reftag": false, 00:35:14.712 "prchk_guard": false, 00:35:14.712 "hdgst": false, 00:35:14.712 "ddgst": false, 00:35:14.712 "psk": "key0", 00:35:14.712 "allow_unrecognized_csi": false, 00:35:14.712 "method": "bdev_nvme_attach_controller", 00:35:14.712 "req_id": 1 00:35:14.712 } 00:35:14.712 Got JSON-RPC error response 00:35:14.712 response: 00:35:14.712 { 00:35:14.712 "code": -19, 00:35:14.712 "message": "No such device" 00:35:14.712 } 00:35:14.712 11:03:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:14.712 11:03:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.712 11:03:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.712 11:03:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.712 11:03:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:14.712 11:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:14.970 11:03:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aANSxleBeY 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:14.970 11:03:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:14.970 11:03:02 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:14.970 11:03:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:14.970 11:03:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:14.970 11:03:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:14.970 11:03:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aANSxleBeY 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aANSxleBeY 00:35:14.970 11:03:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.aANSxleBeY 00:35:14.970 11:03:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aANSxleBeY 00:35:14.970 11:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aANSxleBeY 00:35:15.227 11:03:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.227 11:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:15.485 nvme0n1 00:35:15.485 11:03:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:15.485 11:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:15.485 11:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.485 11:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.485 11:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.485 
11:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.051 11:03:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:16.051 11:03:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:16.051 11:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:16.051 11:03:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:16.051 11:03:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:16.051 11:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.051 11:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.051 11:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.614 11:03:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:16.614 11:03:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:16.614 11:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.614 11:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.614 11:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.614 11:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.614 11:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.614 11:03:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:16.614 11:03:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:16.615 11:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:17.179 11:03:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:17.179 11:03:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:17.179 11:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.179 11:03:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:17.179 11:03:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aANSxleBeY 00:35:17.179 11:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aANSxleBeY 00:35:17.437 11:03:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gbWkzoqg7v 00:35:17.437 11:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gbWkzoqg7v 00:35:17.694 11:03:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.694 11:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.259 nvme0n1 00:35:18.259 11:03:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:18.259 11:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:18.516 11:03:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:18.516 "subsystems": [ 00:35:18.516 { 00:35:18.516 "subsystem": "keyring", 00:35:18.516 
"config": [ 00:35:18.516 { 00:35:18.516 "method": "keyring_file_add_key", 00:35:18.516 "params": { 00:35:18.516 "name": "key0", 00:35:18.516 "path": "/tmp/tmp.aANSxleBeY" 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "keyring_file_add_key", 00:35:18.516 "params": { 00:35:18.516 "name": "key1", 00:35:18.516 "path": "/tmp/tmp.gbWkzoqg7v" 00:35:18.516 } 00:35:18.516 } 00:35:18.516 ] 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "subsystem": "iobuf", 00:35:18.516 "config": [ 00:35:18.516 { 00:35:18.516 "method": "iobuf_set_options", 00:35:18.516 "params": { 00:35:18.516 "small_pool_count": 8192, 00:35:18.516 "large_pool_count": 1024, 00:35:18.516 "small_bufsize": 8192, 00:35:18.516 "large_bufsize": 135168, 00:35:18.516 "enable_numa": false 00:35:18.516 } 00:35:18.516 } 00:35:18.516 ] 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "subsystem": "sock", 00:35:18.516 "config": [ 00:35:18.516 { 00:35:18.516 "method": "sock_set_default_impl", 00:35:18.516 "params": { 00:35:18.516 "impl_name": "posix" 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "sock_impl_set_options", 00:35:18.516 "params": { 00:35:18.516 "impl_name": "ssl", 00:35:18.516 "recv_buf_size": 4096, 00:35:18.516 "send_buf_size": 4096, 00:35:18.516 "enable_recv_pipe": true, 00:35:18.516 "enable_quickack": false, 00:35:18.516 "enable_placement_id": 0, 00:35:18.516 "enable_zerocopy_send_server": true, 00:35:18.516 "enable_zerocopy_send_client": false, 00:35:18.516 "zerocopy_threshold": 0, 00:35:18.516 "tls_version": 0, 00:35:18.516 "enable_ktls": false 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "sock_impl_set_options", 00:35:18.516 "params": { 00:35:18.516 "impl_name": "posix", 00:35:18.516 "recv_buf_size": 2097152, 00:35:18.516 "send_buf_size": 2097152, 00:35:18.516 "enable_recv_pipe": true, 00:35:18.516 "enable_quickack": false, 00:35:18.516 "enable_placement_id": 0, 00:35:18.516 "enable_zerocopy_send_server": true, 00:35:18.516 
"enable_zerocopy_send_client": false, 00:35:18.516 "zerocopy_threshold": 0, 00:35:18.516 "tls_version": 0, 00:35:18.516 "enable_ktls": false 00:35:18.516 } 00:35:18.516 } 00:35:18.516 ] 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "subsystem": "vmd", 00:35:18.516 "config": [] 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "subsystem": "accel", 00:35:18.516 "config": [ 00:35:18.516 { 00:35:18.516 "method": "accel_set_options", 00:35:18.516 "params": { 00:35:18.516 "small_cache_size": 128, 00:35:18.516 "large_cache_size": 16, 00:35:18.516 "task_count": 2048, 00:35:18.516 "sequence_count": 2048, 00:35:18.516 "buf_count": 2048 00:35:18.516 } 00:35:18.516 } 00:35:18.516 ] 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "subsystem": "bdev", 00:35:18.516 "config": [ 00:35:18.516 { 00:35:18.516 "method": "bdev_set_options", 00:35:18.516 "params": { 00:35:18.516 "bdev_io_pool_size": 65535, 00:35:18.516 "bdev_io_cache_size": 256, 00:35:18.516 "bdev_auto_examine": true, 00:35:18.516 "iobuf_small_cache_size": 128, 00:35:18.516 "iobuf_large_cache_size": 16 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "bdev_raid_set_options", 00:35:18.516 "params": { 00:35:18.516 "process_window_size_kb": 1024, 00:35:18.516 "process_max_bandwidth_mb_sec": 0 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "bdev_iscsi_set_options", 00:35:18.516 "params": { 00:35:18.516 "timeout_sec": 30 00:35:18.516 } 00:35:18.516 }, 00:35:18.516 { 00:35:18.516 "method": "bdev_nvme_set_options", 00:35:18.516 "params": { 00:35:18.516 "action_on_timeout": "none", 00:35:18.516 "timeout_us": 0, 00:35:18.516 "timeout_admin_us": 0, 00:35:18.516 "keep_alive_timeout_ms": 10000, 00:35:18.516 "arbitration_burst": 0, 00:35:18.516 "low_priority_weight": 0, 00:35:18.516 "medium_priority_weight": 0, 00:35:18.516 "high_priority_weight": 0, 00:35:18.516 "nvme_adminq_poll_period_us": 10000, 00:35:18.516 "nvme_ioq_poll_period_us": 0, 00:35:18.516 "io_queue_requests": 512, 00:35:18.516 
"delay_cmd_submit": true, 00:35:18.517 "transport_retry_count": 4, 00:35:18.517 "bdev_retry_count": 3, 00:35:18.517 "transport_ack_timeout": 0, 00:35:18.517 "ctrlr_loss_timeout_sec": 0, 00:35:18.517 "reconnect_delay_sec": 0, 00:35:18.517 "fast_io_fail_timeout_sec": 0, 00:35:18.517 "disable_auto_failback": false, 00:35:18.517 "generate_uuids": false, 00:35:18.517 "transport_tos": 0, 00:35:18.517 "nvme_error_stat": false, 00:35:18.517 "rdma_srq_size": 0, 00:35:18.517 "io_path_stat": false, 00:35:18.517 "allow_accel_sequence": false, 00:35:18.517 "rdma_max_cq_size": 0, 00:35:18.517 "rdma_cm_event_timeout_ms": 0, 00:35:18.517 "dhchap_digests": [ 00:35:18.517 "sha256", 00:35:18.517 "sha384", 00:35:18.517 "sha512" 00:35:18.517 ], 00:35:18.517 "dhchap_dhgroups": [ 00:35:18.517 "null", 00:35:18.517 "ffdhe2048", 00:35:18.517 "ffdhe3072", 00:35:18.517 "ffdhe4096", 00:35:18.517 "ffdhe6144", 00:35:18.517 "ffdhe8192" 00:35:18.517 ] 00:35:18.517 } 00:35:18.517 }, 00:35:18.517 { 00:35:18.517 "method": "bdev_nvme_attach_controller", 00:35:18.517 "params": { 00:35:18.517 "name": "nvme0", 00:35:18.517 "trtype": "TCP", 00:35:18.517 "adrfam": "IPv4", 00:35:18.517 "traddr": "127.0.0.1", 00:35:18.517 "trsvcid": "4420", 00:35:18.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.517 "prchk_reftag": false, 00:35:18.517 "prchk_guard": false, 00:35:18.517 "ctrlr_loss_timeout_sec": 0, 00:35:18.517 "reconnect_delay_sec": 0, 00:35:18.517 "fast_io_fail_timeout_sec": 0, 00:35:18.517 "psk": "key0", 00:35:18.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.517 "hdgst": false, 00:35:18.517 "ddgst": false, 00:35:18.517 "multipath": "multipath" 00:35:18.517 } 00:35:18.517 }, 00:35:18.517 { 00:35:18.517 "method": "bdev_nvme_set_hotplug", 00:35:18.517 "params": { 00:35:18.517 "period_us": 100000, 00:35:18.517 "enable": false 00:35:18.517 } 00:35:18.517 }, 00:35:18.517 { 00:35:18.517 "method": "bdev_wait_for_examine" 00:35:18.517 } 00:35:18.517 ] 00:35:18.517 }, 00:35:18.517 { 00:35:18.517 
"subsystem": "nbd", 00:35:18.517 "config": [] 00:35:18.517 } 00:35:18.517 ] 00:35:18.517 }' 00:35:18.517 11:03:05 keyring_file -- keyring/file.sh@115 -- # killprocess 1535515 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1535515 ']' 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1535515 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535515 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535515' 00:35:18.517 killing process with pid 1535515 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@973 -- # kill 1535515 00:35:18.517 Received shutdown signal, test time was about 1.000000 seconds 00:35:18.517 00:35:18.517 Latency(us) 00:35:18.517 [2024-11-19T10:03:06.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.517 [2024-11-19T10:03:06.140Z] =================================================================================================================== 00:35:18.517 [2024-11-19T10:03:06.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.517 11:03:05 keyring_file -- common/autotest_common.sh@978 -- # wait 1535515 00:35:18.775 11:03:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=1537166 00:35:18.775 11:03:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1537166 /var/tmp/bperf.sock 00:35:18.775 11:03:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1537166 ']' 00:35:18.775 11:03:06 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:18.775 11:03:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.775 11:03:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.775 11:03:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.775 11:03:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.775 11:03:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:18.775 "subsystems": [ 00:35:18.775 { 00:35:18.775 "subsystem": "keyring", 00:35:18.775 "config": [ 00:35:18.775 { 00:35:18.775 "method": "keyring_file_add_key", 00:35:18.775 "params": { 00:35:18.775 "name": "key0", 00:35:18.775 "path": "/tmp/tmp.aANSxleBeY" 00:35:18.775 } 00:35:18.775 }, 00:35:18.775 { 00:35:18.775 "method": "keyring_file_add_key", 00:35:18.775 "params": { 00:35:18.775 "name": "key1", 00:35:18.775 "path": "/tmp/tmp.gbWkzoqg7v" 00:35:18.775 } 00:35:18.775 } 00:35:18.775 ] 00:35:18.775 }, 00:35:18.775 { 00:35:18.775 "subsystem": "iobuf", 00:35:18.775 "config": [ 00:35:18.775 { 00:35:18.775 "method": "iobuf_set_options", 00:35:18.775 "params": { 00:35:18.775 "small_pool_count": 8192, 00:35:18.775 "large_pool_count": 1024, 00:35:18.775 "small_bufsize": 8192, 00:35:18.775 "large_bufsize": 135168, 00:35:18.775 "enable_numa": false 00:35:18.775 } 00:35:18.775 } 00:35:18.775 ] 00:35:18.775 }, 00:35:18.775 { 00:35:18.775 "subsystem": "sock", 00:35:18.775 "config": [ 00:35:18.775 { 00:35:18.775 "method": "sock_set_default_impl", 00:35:18.775 "params": { 00:35:18.775 "impl_name": "posix" 00:35:18.775 } 00:35:18.775 }, 00:35:18.775 { 00:35:18.775 "method": "sock_impl_set_options", 
00:35:18.776 "params": { 00:35:18.776 "impl_name": "ssl", 00:35:18.776 "recv_buf_size": 4096, 00:35:18.776 "send_buf_size": 4096, 00:35:18.776 "enable_recv_pipe": true, 00:35:18.776 "enable_quickack": false, 00:35:18.776 "enable_placement_id": 0, 00:35:18.776 "enable_zerocopy_send_server": true, 00:35:18.776 "enable_zerocopy_send_client": false, 00:35:18.776 "zerocopy_threshold": 0, 00:35:18.776 "tls_version": 0, 00:35:18.776 "enable_ktls": false 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "sock_impl_set_options", 00:35:18.776 "params": { 00:35:18.776 "impl_name": "posix", 00:35:18.776 "recv_buf_size": 2097152, 00:35:18.776 "send_buf_size": 2097152, 00:35:18.776 "enable_recv_pipe": true, 00:35:18.776 "enable_quickack": false, 00:35:18.776 "enable_placement_id": 0, 00:35:18.776 "enable_zerocopy_send_server": true, 00:35:18.776 "enable_zerocopy_send_client": false, 00:35:18.776 "zerocopy_threshold": 0, 00:35:18.776 "tls_version": 0, 00:35:18.776 "enable_ktls": false 00:35:18.776 } 00:35:18.776 } 00:35:18.776 ] 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "subsystem": "vmd", 00:35:18.776 "config": [] 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "subsystem": "accel", 00:35:18.776 "config": [ 00:35:18.776 { 00:35:18.776 "method": "accel_set_options", 00:35:18.776 "params": { 00:35:18.776 "small_cache_size": 128, 00:35:18.776 "large_cache_size": 16, 00:35:18.776 "task_count": 2048, 00:35:18.776 "sequence_count": 2048, 00:35:18.776 "buf_count": 2048 00:35:18.776 } 00:35:18.776 } 00:35:18.776 ] 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "subsystem": "bdev", 00:35:18.776 "config": [ 00:35:18.776 { 00:35:18.776 "method": "bdev_set_options", 00:35:18.776 "params": { 00:35:18.776 "bdev_io_pool_size": 65535, 00:35:18.776 "bdev_io_cache_size": 256, 00:35:18.776 "bdev_auto_examine": true, 00:35:18.776 "iobuf_small_cache_size": 128, 00:35:18.776 "iobuf_large_cache_size": 16 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": 
"bdev_raid_set_options", 00:35:18.776 "params": { 00:35:18.776 "process_window_size_kb": 1024, 00:35:18.776 "process_max_bandwidth_mb_sec": 0 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "bdev_iscsi_set_options", 00:35:18.776 "params": { 00:35:18.776 "timeout_sec": 30 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "bdev_nvme_set_options", 00:35:18.776 "params": { 00:35:18.776 "action_on_timeout": "none", 00:35:18.776 "timeout_us": 0, 00:35:18.776 "timeout_admin_us": 0, 00:35:18.776 "keep_alive_timeout_ms": 10000, 00:35:18.776 "arbitration_burst": 0, 00:35:18.776 "low_priority_weight": 0, 00:35:18.776 "medium_priority_weight": 0, 00:35:18.776 "high_priority_weight": 0, 00:35:18.776 "nvme_adminq_poll_period_us": 10000, 00:35:18.776 "nvme_ioq_poll_period_us": 0, 00:35:18.776 "io_queue_requests": 512, 00:35:18.776 "delay_cmd_submit": true, 00:35:18.776 "transport_retry_count": 4, 00:35:18.776 "bdev_retry_count": 3, 00:35:18.776 "transport_ack_timeout": 0, 00:35:18.776 "ctrlr_loss_timeout_sec": 0, 00:35:18.776 "reconnect_delay_sec": 0, 00:35:18.776 "fast_io_fail_timeout_sec": 0, 00:35:18.776 "disable_auto_failback": false, 00:35:18.776 "generate_uuids": false, 00:35:18.776 "transport_tos": 0, 00:35:18.776 "nvme_error_stat": false, 00:35:18.776 "rdma_srq_size": 0, 00:35:18.776 "io_path_stat": false, 00:35:18.776 "allow_accel_sequence": false, 00:35:18.776 "rdma_max_cq_size": 0, 00:35:18.776 "rdma_cm_event_timeout_ms": 0, 00:35:18.776 "dhchap_digests": [ 00:35:18.776 "sha256", 00:35:18.776 "sha384", 00:35:18.776 "sha512" 00:35:18.776 ], 00:35:18.776 "dhchap_dhgroups": [ 00:35:18.776 "null", 00:35:18.776 "ffdhe2048", 00:35:18.776 "ffdhe3072", 00:35:18.776 "ffdhe4096", 00:35:18.776 "ffdhe6144", 00:35:18.776 "ffdhe8192" 00:35:18.776 ] 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "bdev_nvme_attach_controller", 00:35:18.776 "params": { 00:35:18.776 "name": "nvme0", 00:35:18.776 "trtype": "TCP", 00:35:18.776 
"adrfam": "IPv4", 00:35:18.776 "traddr": "127.0.0.1", 00:35:18.776 "trsvcid": "4420", 00:35:18.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.776 "prchk_reftag": false, 00:35:18.776 "prchk_guard": false, 00:35:18.776 "ctrlr_loss_timeout_sec": 0, 00:35:18.776 "reconnect_delay_sec": 0, 00:35:18.776 "fast_io_fail_timeout_sec": 0, 00:35:18.776 "psk": "key0", 00:35:18.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.776 "hdgst": false, 00:35:18.776 "ddgst": false, 00:35:18.776 "multipath": "multipath" 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "bdev_nvme_set_hotplug", 00:35:18.776 "params": { 00:35:18.776 "period_us": 100000, 00:35:18.776 "enable": false 00:35:18.776 } 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "method": "bdev_wait_for_examine" 00:35:18.776 } 00:35:18.776 ] 00:35:18.776 }, 00:35:18.776 { 00:35:18.776 "subsystem": "nbd", 00:35:18.776 "config": [] 00:35:18.776 } 00:35:18.776 ] 00:35:18.776 }' 00:35:18.776 11:03:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:18.776 [2024-11-19 11:03:06.240422] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:35:18.776 [2024-11-19 11:03:06.240515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537166 ] 00:35:18.776 [2024-11-19 11:03:06.307335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.776 [2024-11-19 11:03:06.366431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.036 [2024-11-19 11:03:06.549319] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:19.036 11:03:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.294 11:03:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:19.295 11:03:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:19.295 11:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.295 11:03:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:19.552 11:03:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:19.553 11:03:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:19.553 11:03:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:19.553 11:03:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.553 11:03:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.553 11:03:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.553 11:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.811 11:03:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:19.811 11:03:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:19.811 11:03:07 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:19.811 11:03:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.811 11:03:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.811 11:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.811 11:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.069 11:03:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:20.069 11:03:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:20.069 11:03:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:20.069 11:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:20.327 11:03:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:20.327 11:03:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:20.327 11:03:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aANSxleBeY /tmp/tmp.gbWkzoqg7v 00:35:20.327 11:03:07 keyring_file -- keyring/file.sh@20 -- # killprocess 1537166 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1537166 ']' 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1537166 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537166 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1537166' 00:35:20.327 killing process with pid 1537166 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@973 -- # kill 1537166 00:35:20.327 Received shutdown signal, test time was about 1.000000 seconds 00:35:20.327 00:35:20.327 Latency(us) 00:35:20.327 [2024-11-19T10:03:07.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.327 [2024-11-19T10:03:07.950Z] =================================================================================================================== 00:35:20.327 [2024-11-19T10:03:07.950Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:20.327 11:03:07 keyring_file -- common/autotest_common.sh@978 -- # wait 1537166 00:35:20.585 11:03:08 keyring_file -- keyring/file.sh@21 -- # killprocess 1535509 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1535509 ']' 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1535509 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535509 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535509' 00:35:20.585 killing process with pid 1535509 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@973 -- # kill 1535509 00:35:20.585 11:03:08 keyring_file -- common/autotest_common.sh@978 -- # wait 1535509 00:35:20.844 00:35:20.844 real 0m14.681s 00:35:20.844 user 0m37.319s 00:35:20.844 sys 0m3.334s 00:35:20.844 11:03:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:20.844 11:03:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.844 ************************************ 00:35:20.844 END TEST keyring_file 00:35:20.844 ************************************ 00:35:21.101 11:03:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:21.101 11:03:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:21.101 11:03:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:21.101 11:03:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.101 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:35:21.101 ************************************ 00:35:21.101 START TEST keyring_linux 00:35:21.101 ************************************ 00:35:21.101 11:03:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:21.101 Joined session keyring: 464726562 00:35:21.101 * Looking for test storage... 
00:35:21.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:21.101 11:03:08 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:21.101 11:03:08 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:21.101 11:03:08 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.101 11:03:08 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.101 11:03:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:21.102 11:03:08 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.102 11:03:08 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.102 --rc genhtml_branch_coverage=1 00:35:21.102 --rc genhtml_function_coverage=1 00:35:21.102 --rc genhtml_legend=1 00:35:21.102 --rc geninfo_all_blocks=1 00:35:21.102 --rc geninfo_unexecuted_blocks=1 00:35:21.102 00:35:21.102 ' 00:35:21.102 11:03:08 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.102 --rc genhtml_branch_coverage=1 00:35:21.102 --rc genhtml_function_coverage=1 00:35:21.102 --rc genhtml_legend=1 00:35:21.102 --rc geninfo_all_blocks=1 00:35:21.102 --rc geninfo_unexecuted_blocks=1 00:35:21.102 00:35:21.102 ' 
00:35:21.102 11:03:08 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.102 --rc genhtml_branch_coverage=1 00:35:21.102 --rc genhtml_function_coverage=1 00:35:21.102 --rc genhtml_legend=1 00:35:21.102 --rc geninfo_all_blocks=1 00:35:21.102 --rc geninfo_unexecuted_blocks=1 00:35:21.102 00:35:21.102 ' 00:35:21.102 11:03:08 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.102 --rc genhtml_branch_coverage=1 00:35:21.102 --rc genhtml_function_coverage=1 00:35:21.102 --rc genhtml_legend=1 00:35:21.102 --rc geninfo_all_blocks=1 00:35:21.102 --rc geninfo_unexecuted_blocks=1 00:35:21.102 00:35:21.102 ' 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.102 11:03:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.102 11:03:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.102 11:03:08 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.102 11:03:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.102 11:03:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:21.102 11:03:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:21.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:21.102 11:03:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:21.102 11:03:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:21.102 11:03:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:21.103 /tmp/:spdk-test:key0 00:35:21.103 11:03:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:21.103 11:03:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:21.103 11:03:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:21.361 11:03:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:21.361 11:03:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:21.361 /tmp/:spdk-test:key1 00:35:21.361 11:03:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1537975 00:35:21.361 11:03:08 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:21.361 11:03:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1537975 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1537975 ']' 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.361 11:03:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:21.361 [2024-11-19 11:03:08.800158] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:35:21.361 [2024-11-19 11:03:08.800268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537975 ] 00:35:21.361 [2024-11-19 11:03:08.868000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.361 [2024-11-19 11:03:08.928139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.619 11:03:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.619 11:03:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:21.619 11:03:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:21.619 11:03:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.619 11:03:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:21.619 [2024-11-19 11:03:09.192932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:21.619 null0 00:35:21.619 [2024-11-19 11:03:09.224988] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:21.619 [2024-11-19 11:03:09.225486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.878 11:03:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:21.878 277580062 00:35:21.878 11:03:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:21.878 134563907 00:35:21.878 11:03:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1538090 00:35:21.878 11:03:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:21.878 11:03:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1538090 /var/tmp/bperf.sock 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1538090 ']' 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:21.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.878 11:03:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:21.878 [2024-11-19 11:03:09.290822] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:35:21.878 [2024-11-19 11:03:09.290896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538090 ] 00:35:21.878 [2024-11-19 11:03:09.355768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.878 [2024-11-19 11:03:09.413157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.135 11:03:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.135 11:03:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:22.135 11:03:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:22.135 11:03:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:22.392 11:03:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:22.392 11:03:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:22.650 11:03:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:22.650 11:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:22.909 [2024-11-19 11:03:10.423131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:22.909 nvme0n1 00:35:22.909 11:03:10 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:22.909 11:03:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:22.909 11:03:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:22.909 11:03:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:22.909 11:03:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:22.909 11:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.167 11:03:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:23.167 11:03:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:23.424 11:03:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:23.425 11:03:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:23.425 11:03:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.425 11:03:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:23.425 11:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@25 -- # sn=277580062 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 277580062 == \2\7\7\5\8\0\0\6\2 ]] 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 277580062 00:35:23.682 11:03:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:23.682 11:03:11 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.682 Running I/O for 1 seconds... 00:35:24.615 11292.00 IOPS, 44.11 MiB/s 00:35:24.615 Latency(us) 00:35:24.615 [2024-11-19T10:03:12.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.615 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:24.615 nvme0n1 : 1.01 11297.67 44.13 0.00 0.00 11260.84 8107.05 19903.53 00:35:24.615 [2024-11-19T10:03:12.238Z] =================================================================================================================== 00:35:24.615 [2024-11-19T10:03:12.238Z] Total : 11297.67 44.13 0.00 0.00 11260.84 8107.05 19903.53 00:35:24.615 { 00:35:24.615 "results": [ 00:35:24.615 { 00:35:24.615 "job": "nvme0n1", 00:35:24.615 "core_mask": "0x2", 00:35:24.615 "workload": "randread", 00:35:24.615 "status": "finished", 00:35:24.615 "queue_depth": 128, 00:35:24.615 "io_size": 4096, 00:35:24.615 "runtime": 1.010916, 00:35:24.615 "iops": 11297.674584238453, 00:35:24.615 "mibps": 44.13154134468146, 00:35:24.615 "io_failed": 0, 00:35:24.615 "io_timeout": 0, 00:35:24.615 "avg_latency_us": 11260.840137109353, 00:35:24.615 "min_latency_us": 8107.045925925926, 00:35:24.615 "max_latency_us": 19903.525925925926 00:35:24.615 } 00:35:24.615 ], 00:35:24.615 "core_count": 1 00:35:24.615 } 00:35:24.615 11:03:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:24.615 11:03:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:24.873 11:03:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:24.873 11:03:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:24.873 11:03:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:24.873 11:03:12 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:24.873 11:03:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:24.873 11:03:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.439 11:03:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:25.439 11:03:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:25.439 11:03:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:25.439 11:03:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.439 11:03:12 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.439 11:03:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:25.439 [2024-11-19 11:03:13.029947] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:25.439 [2024-11-19 11:03:13.030847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa59bc0 (107): Transport endpoint is not connected 00:35:25.439 [2024-11-19 11:03:13.031839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa59bc0 (9): Bad file descriptor 00:35:25.439 [2024-11-19 11:03:13.032839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:25.439 [2024-11-19 11:03:13.032869] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:25.439 [2024-11-19 11:03:13.032898] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:25.439 [2024-11-19 11:03:13.032913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:25.439 request: 00:35:25.439 { 00:35:25.439 "name": "nvme0", 00:35:25.439 "trtype": "tcp", 00:35:25.439 "traddr": "127.0.0.1", 00:35:25.439 "adrfam": "ipv4", 00:35:25.439 "trsvcid": "4420", 00:35:25.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.439 "prchk_reftag": false, 00:35:25.439 "prchk_guard": false, 00:35:25.439 "hdgst": false, 00:35:25.439 "ddgst": false, 00:35:25.439 "psk": ":spdk-test:key1", 00:35:25.439 "allow_unrecognized_csi": false, 00:35:25.439 "method": "bdev_nvme_attach_controller", 00:35:25.439 "req_id": 1 00:35:25.439 } 00:35:25.439 Got JSON-RPC error response 00:35:25.439 response: 00:35:25.439 { 00:35:25.439 "code": -5, 00:35:25.439 "message": "Input/output error" 00:35:25.439 } 00:35:25.439 11:03:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:25.439 11:03:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:25.439 11:03:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:25.439 11:03:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@33 -- # sn=277580062 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 277580062 00:35:25.439 1 links removed 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:25.439 11:03:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:25.696 
11:03:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:25.696 11:03:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:25.696 11:03:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:25.696 11:03:13 keyring_linux -- keyring/linux.sh@33 -- # sn=134563907 00:35:25.696 11:03:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 134563907 00:35:25.696 1 links removed 00:35:25.696 11:03:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1538090 00:35:25.696 11:03:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1538090 ']' 00:35:25.696 11:03:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1538090 00:35:25.696 11:03:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:25.696 11:03:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.696 11:03:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538090 00:35:25.697 11:03:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.697 11:03:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.697 11:03:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538090' 00:35:25.697 killing process with pid 1538090 00:35:25.697 11:03:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 1538090 00:35:25.697 Received shutdown signal, test time was about 1.000000 seconds 00:35:25.697 00:35:25.697 Latency(us) 00:35:25.697 [2024-11-19T10:03:13.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.697 [2024-11-19T10:03:13.320Z] =================================================================================================================== 00:35:25.697 [2024-11-19T10:03:13.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.697 11:03:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 1538090 
00:35:25.954 11:03:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1537975 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1537975 ']' 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1537975 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537975 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.954 11:03:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.955 11:03:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537975' 00:35:25.955 killing process with pid 1537975 00:35:25.955 11:03:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 1537975 00:35:25.955 11:03:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 1537975 00:35:26.213 00:35:26.213 real 0m5.274s 00:35:26.213 user 0m10.489s 00:35:26.213 sys 0m1.643s 00:35:26.213 11:03:13 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.213 11:03:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:26.213 ************************************ 00:35:26.213 END TEST keyring_linux 00:35:26.213 ************************************ 00:35:26.213 11:03:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:26.213 11:03:13 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']'
00:35:26.213 11:03:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:26.213 11:03:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:26.213 11:03:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:26.213 11:03:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:26.213 11:03:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:26.213 11:03:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:26.213 11:03:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:26.213 11:03:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:26.213 11:03:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:26.213 11:03:13 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:26.213 11:03:13 -- common/autotest_common.sh@10 -- # set +x
00:35:26.213 11:03:13 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:26.213 11:03:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:26.213 11:03:13 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:26.213 11:03:13 -- common/autotest_common.sh@10 -- # set +x
00:35:28.114 INFO: APP EXITING
00:35:28.114 INFO: killing all VMs
00:35:28.114 INFO: killing vhost app
00:35:28.114 INFO: EXIT DONE
00:35:29.490 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:35:29.490 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:35:29.490 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:35:29.490 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:35:29.490 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:35:29.490 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:35:29.490 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:35:29.490 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:35:29.490 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:35:29.490 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:35:29.490 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:35:29.490 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:35:29.490 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:35:29.748 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:35:29.748 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:35:29.748 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:35:29.748 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:35:31.125 Cleaning
00:35:31.125 Removing: /var/run/dpdk/spdk0/config
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:31.125 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:31.125 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:31.125 Removing: /var/run/dpdk/spdk1/config
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:31.125 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:31.125 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:31.125 Removing: /var/run/dpdk/spdk2/config
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:31.125 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:31.125 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:31.125 Removing: /var/run/dpdk/spdk3/config
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:31.125 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:31.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:31.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:31.126 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:31.126 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:31.126 Removing: /var/run/dpdk/spdk4/config
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:31.126 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:31.126 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:31.126 Removing: /dev/shm/bdev_svc_trace.1
00:35:31.126 Removing: /dev/shm/nvmf_trace.0
00:35:31.126 Removing: /dev/shm/spdk_tgt_trace.pid1216215
00:35:31.126 Removing: /var/run/dpdk/spdk0
00:35:31.126 Removing: /var/run/dpdk/spdk1
00:35:31.126 Removing: /var/run/dpdk/spdk2
00:35:31.126 Removing: /var/run/dpdk/spdk3
00:35:31.126 Removing: /var/run/dpdk/spdk4
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1214565
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1215303
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1216215
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1216576
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1217269
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1217409
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1218121
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1218239
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1218514
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1219717
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1220653
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1220965
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1221165
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1221384
00:35:31.126 Removing: /var/run/dpdk/spdk_pid1221582
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1221746
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1222006
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1222205
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1222398
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1224884
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1225050
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1225322
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1225342
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1225652
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1225776
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226089
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226214
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226384
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226465
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226681
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1226687
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1227184
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1227342
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1227548
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1229675
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1232299
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1240050
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1240463
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1242993
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1243220
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1245796
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1249633
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1251715
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1258137
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1263415
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1264692
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1265366
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1276372
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1278788
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1306345
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1309538
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1314107
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1318380
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1318468
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1319044
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1319707
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1320249
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1320717
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1320769
00:35:31.385 Removing: /var/run/dpdk/spdk_pid1320907
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1321040
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1321050
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1321705
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1322358
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1322896
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1323303
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1323422
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1323567
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1324461
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1325303
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1330528
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1358501
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1361424
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1362721
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1364543
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1364684
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1364825
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1364967
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1365410
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1366732
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1367468
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1367898
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1369506
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1369923
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1370376
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1372771
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1376196
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1376197
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1376198
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1378294
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1383155
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1385803
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1389577
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1390522
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1391619
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1392644
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1395579
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1398559
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1400918
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1405156
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1405162
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1408062
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1408207
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1408339
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1408718
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1408732
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1411504
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1411844
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1414513
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1416491
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1419922
00:35:31.386 Removing: /var/run/dpdk/spdk_pid1423391
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1429883
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1434474
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1434483
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1447372
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1447897
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1448306
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1448712
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1449292
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1449733
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1450230
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1450646
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1453141
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1453286
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1457088
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1457263
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1460505
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1463124
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1470752
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1471197
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1473629
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1473858
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1476484
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1480175
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1482338
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1488600
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1493805
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1495104
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1495767
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1506593
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1508848
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1510840
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1515868
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1515902
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1518799
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1520197
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1521599
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1522453
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1523870
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1524749
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1530042
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1530421
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1530822
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1532373
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1532774
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1533053
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1535509
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1535515
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1537166
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1537975
00:35:31.645 Removing: /var/run/dpdk/spdk_pid1538090
00:35:31.645 Clean
00:35:31.645 11:03:19 -- common/autotest_common.sh@1453 -- # return 0
00:35:31.645 11:03:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:31.645 11:03:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:31.645 11:03:19 -- common/autotest_common.sh@10 -- # set +x
00:35:31.645 11:03:19 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:31.645 11:03:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:31.645 11:03:19 -- common/autotest_common.sh@10 -- # set +x
00:35:31.645 11:03:19 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:31.645 11:03:19 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:31.645 11:03:19 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:31.645 11:03:19 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:31.645 11:03:19 -- spdk/autotest.sh@398 -- # hostname
00:35:31.645 11:03:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:31.904 geninfo: WARNING: invalid characters removed from testname!
00:36:04.065 11:03:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:07.347 11:03:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:10.629 11:03:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:13.157 11:04:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:16.438 11:04:03 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:19.719 11:04:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:22.248 11:04:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:22.248 11:04:09 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:22.248 11:04:09 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:22.248 11:04:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:22.248 11:04:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:22.248 11:04:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:22.248 + [[ -n 1143982 ]]
00:36:22.248 + sudo kill 1143982
00:36:22.259 [Pipeline] }
00:36:22.276 [Pipeline] // stage
00:36:22.282 [Pipeline] }
00:36:22.298 [Pipeline] // timeout
00:36:22.304 [Pipeline] }
00:36:22.320 [Pipeline] // catchError
00:36:22.325 [Pipeline] }
00:36:22.341 [Pipeline] // wrap
00:36:22.348 [Pipeline] }
00:36:22.359 [Pipeline] // catchError
00:36:22.366 [Pipeline] stage
00:36:22.367 [Pipeline] { (Epilogue)
00:36:22.376 [Pipeline] catchError
00:36:22.377 [Pipeline] {
00:36:22.386 [Pipeline] echo
00:36:22.388 Cleanup processes
00:36:22.392 [Pipeline] sh
00:36:22.677 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:22.677 1548781 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:22.692 [Pipeline] sh
00:36:22.978 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:22.978 ++ grep -v 'sudo pgrep'
00:36:22.978 ++ awk '{print $1}'
00:36:22.978 + sudo kill -9
00:36:22.978 + true
00:36:22.992 [Pipeline] sh
00:36:23.278 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:33.258 [Pipeline] sh
00:36:33.547 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:33.547 Artifacts sizes are good
00:36:33.566 [Pipeline] archiveArtifacts
00:36:33.574 Archiving artifacts
00:36:33.750 [Pipeline] sh
00:36:34.077 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:34.093 [Pipeline] cleanWs
00:36:34.104 [WS-CLEANUP] Deleting project workspace...
00:36:34.104 [WS-CLEANUP] Deferred wipeout is used...
00:36:34.111 [WS-CLEANUP] done
00:36:34.113 [Pipeline] }
00:36:34.132 [Pipeline] // catchError
00:36:34.147 [Pipeline] sh
00:36:34.433 + logger -p user.info -t JENKINS-CI
00:36:34.442 [Pipeline] }
00:36:34.456 [Pipeline] // stage
00:36:34.462 [Pipeline] }
00:36:34.477 [Pipeline] // node
00:36:34.482 [Pipeline] End of Pipeline
00:36:34.520 Finished: SUCCESS